How to implement custom timeout for function that connects to server - python-3.x

I want to establish a connection with a b0 client for CoppeliaSim using the Python API. Unfortunately, this connection function has no timeout and will run indefinitely if it fails to connect.
To counter that, I tried moving the connection into a separate process (multiprocessing) and checking after a couple of seconds whether the process is still alive. If it is, I kill the process and continue with the program.
This sort of works, as it no longer blocks my program. However, the process does not stop when the connection is successfully made, so my code kills it even when the connection succeeds.
How can I fix this and also write the b0client to the global variable?
import multiprocessing

import b0RemoteApi

def connection_function():
    global b0client
    b0client = b0RemoteApi.RemoteApiClient('b0RemoteApi_pythonClient', 'b0RemoteApi', 60)
    print('Success!')
    return 0

def establish_b0_connection(timeout):
    connection_process = multiprocessing.Process(target=connection_function)
    connection_process.start()
    # Wait for [timeout] seconds or until the process finishes
    connection_process.join(timeout=timeout)
    # If the process is still active
    if connection_process.is_alive():
        print('[INITIALIZATION OF B0 API CLIENT FAILED]')
        # Terminate - may not work if the process is stuck for good
        connection_process.terminate()
        # OR kill - will work for sure, but gives the process no chance to finish nicely
        # connection_process.kill()
        connection_process.join()
        print('[CONTINUING WITHOUT B0 API CLIENT]')
        return False
    else:
        return True

if __name__ == '__main__':
    b0client = None
    establish_b0_connection(timeout=5)
    # Continue with the code, with or without connection.
    # ...
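One possible fix - a sketch, not from the original post: a child process has its own memory space, so the global written inside connection_function never reaches the parent. Running the blocking connect in a daemon thread keeps the client in the same process, and a queue.Queue with a timeout replaces the join/terminate dance (if the abandoned thread ever does connect, the client simply arrives in the queue unused):

import queue
import threading

import b0RemoteApi

def establish_b0_connection_threaded(timeout):
    result = queue.Queue()

    def connect():
        # Runs in a daemon thread, so it can never block program exit.
        client = b0RemoteApi.RemoteApiClient('b0RemoteApi_pythonClient', 'b0RemoteApi', 60)
        result.put(client)

    threading.Thread(target=connect, daemon=True).start()
    try:
        return result.get(timeout=timeout)  # the connected client, usable in this process
    except queue.Empty:
        return None  # timed out; continue without the client

# usage: b0client = establish_b0_connection_threaded(timeout=5)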

Related

PyZMQ segmentation fault, messages not arriving after restart via bash script

I'm currently facing an issue in which a proxy throws a segmentation fault after a random period of time. Restarting the proxy with a bash script leads to messages no longer arriving.
Sadly, I was not able to recreate the issue. I am aware that this is most likely related to a partly wrong utilization of zmq and that the error gets thrown by CPython.
The system runs 3 different processes.
Process 1, a subscriber whose task is to handle incoming data:
import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://127.0.0.1:8100")
socket.setsockopt(zmq.SUBSCRIBE, b'')
poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)

while True:
    try:
        socks = dict(poller.poll())
        if socket in socks and socks[socket] == zmq.POLLIN:
            action, values = socket.recv_pyobj()
            ##### handling of data #######
    except Exception as e:
        print(e)
Process 2, the proxy:
import zmq

def main():
    context = zmq.Context()
    # Socket facing clients
    frontend = context.socket(zmq.XSUB)
    frontend.bind("tcp://127.0.0.1:5557")
    # Socket facing services
    backend = context.socket(zmq.XPUB)
    backend.bind("tcp://127.0.0.1:8100")
    print("starting broker...")
    while True:
        try:
            zmq.proxy(frontend, backend)
        except KeyboardInterrupt:
            print("stopping broker...")
            frontend.close()
            backend.close()
            context.term()
            quit()
        except Exception as e:
            print(f"failed with {e}")

if __name__ == "__main__":
    main()
Process 3, which runs multiple threads that publish their data:
import zmq

context = zmq.Context()
socket_pub = context.socket(zmq.PUB)
socket_pub.connect("tcp://127.0.0.1:5557")

while True:
    # pre-processing, e.g. outgoing requests
    ######
    # sending results
    message = ["Action", {"foo": "bar"}]
    socket_pub.send_pyobj(message)
As I was not able to recreate the error, and therefore not able to fix it, I am trying to bypass it with a bash script.
The segmentation fault gets thrown in process no. 2 (the proxy), so the bash script simply restarts it.
#!/bin/bash
until python3 process2.py; do
    echo "bridge broker crashed with exit code $?. Respawning.." >&2
    sleep 1
done
The bash script correctly respawns the process if it died due to a segmentation fault, but notifications from process 3 no longer arrive in process 1.
I was not able to track down why this is happening. I rebuilt it locally, and if I manually restarted the proxy (without a segmentation fault) the messages arrived again immediately.
Does anybody have a clue why this is happening, or do I have to find the initial reason for the segmentation fault?

How to shut down CherryPy if no incoming connections for specified time?

I am using CherryPy to speak to an authentication server. The script runs fine if all the inputted information is correct. But if the user makes a mistake typing their ID, the internal HTTP error screen fires OK, but the server keeps running and nothing else in the script will run until the CherryPy engine is closed, so I have to manually kill the script. Is there some code I can put in the index along the lines of
if timer > 10 and connections == 0:
    close cherrypy (< I have a method for this already)
I'm mostly a data mangler, so not used to web servers. Googling shows lots of hits for closing CherryPy when there are too many connections, but not when there have been no connections for a specified (short) time. I realise the point of a web server is usually to hang around waiting for connections, so this may be an odd case. All the same, any help is welcome.
Interesting use case. You can use the CherryPy plugin infrastructure to do something like that; take a look at this ActivityMonitor plugin implementation. It shuts down the server if it is not handling anything and hasn't seen any request in a specified amount of time (in this case 10 seconds).
Maybe you have to adjust the logic on how to shut it down or do anything else in the _verify method.
If you want to read a bit more about the publish/subscribe architecture, take a look at the CherryPy Docs.
import time
import threading

import cherrypy
from cherrypy.process.plugins import Monitor


class ActivityMonitor(Monitor):
    def __init__(self, bus, wait_time, monitor_time=None):
        """
        bus: cherrypy.engine
        wait_time: Seconds since last request that we consider to be active.
        monitor_time: Seconds that we'll wait before verifying the activity.
                      If it is not defined, wait half the `wait_time`.
        """
        if monitor_time is None:
            # if monitor time is not defined, then verify half
            # the wait time since the last request
            monitor_time = wait_time / 2
        super().__init__(
            bus, self._verify, monitor_time, self.__class__.__name__
        )
        # use a lock to make sure the thread that triggers the before_request
        # and after_request does not collide with the monitor method (_verify)
        self._active_request_lock = threading.Lock()
        self._active_requests = 0
        self._wait_time = wait_time
        self._last_request_ts = time.time()

    def _verify(self):
        # verify that we don't have any active requests and
        # shut down the server in case we haven't seen any activity
        # since self._last_request_ts + self._wait_time
        with self._active_request_lock:
            if (not self._active_requests and
                    self._last_request_ts + self._wait_time < time.time()):
                self.bus.exit()  # shutdown the engine

    def before_request(self):
        with self._active_request_lock:
            self._active_requests += 1

    def after_request(self):
        with self._active_request_lock:
            self._active_requests -= 1
            # update the last time a request was served
            self._last_request_ts = time.time()


class Root:
    @cherrypy.expose
    def index(self):
        return "Hello user: current time {:.0f}".format(time.time())


def main():
    # here is how to use the plugin:
    ActivityMonitor(cherrypy.engine, wait_time=10, monitor_time=5).subscribe()
    cherrypy.quickstart(Root())


if __name__ == '__main__':
    main()

multiprocessing: share object (socket) between processes

I would like to create a Process that stores many objects (connections to devices via sockets).
I have a GUI (PyQt5) that should show information about the progress of the processes and the status of the devices. An example tells more:
# Process 1
def conf1():
    dev = some_signal_that_ask_about_dev("device1")
    conf_dev(dev)
    return_device("device1", dev)

# Process 2
def conf2():
    dev = some_signal_that_ask_about_dev("device2")
    do_sth_with_dev(dev)
    return_device("device2", dev)

# Process 3
class DevicesHolder(object):
    def __init__(self):
        self.devices = {
            "device1": Device1("192.168.1.1", 8080),
            "device2": Device2("192.168.1.2", 8081)
        }

    def some_signal_that_ask_about_dev(self, dev_name):
        if self.devices[dev_name]:
            dev = self.devices[dev_name]
            # this device is taken by a process.
            # If the process takes the device and fails, the device should be recreated!
            self.devices[dev_name] = None
            return dev

    def return_device(self, dev_name, dev):
        self.devices[dev_name] = dev

    def get_status_of_devices(self):
        # Check connection to devices and return response
        pass

# Process 4:
# GUI:
get_status_of_devices()
So process 1 and process 2 do some work and send progress to the GUI. I would like to have info about the devices' status as well.
Why not just create the object locally (per process) and send info from that process?
A process runs for only a few seconds, while the app runs for minutes. I want to know that there is a connection problem before I press the start button, not have the whole app fail because of the connection.
I think I am overcomplicating a simple problem. Help me!
More info
Every process configures something else, but over the same connection.
I would like this to work as quickly as possible.
It will run on Linux, but I care about multi-platform support.
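A sketch of one possible direction (names are my assumptions, not from the question): socket objects generally cannot be pickled and passed between processes, so instead of handing the device object out, keep every connection inside a single holder process and let the workers and the GUI talk to it over queues:

import multiprocessing as mp

def device_holder(cmd_queue, result_queue):
    # All sockets live only in this process; workers send commands
    # instead of receiving socket objects.
    devices = {}  # name -> connected device, created lazily here
    while True:
        cmd, dev_name, payload = cmd_queue.get()
        if cmd == 'status':
            result_queue.put((dev_name, dev_name in devices))
        elif cmd == 'configure':
            # connect on first use and apply `payload` here, so the
            # socket never has to cross a process boundary
            ...
        elif cmd == 'quit':
            break

if __name__ == '__main__':
    cmds, results = mp.Queue(), mp.Queue()
    holder = mp.Process(target=device_holder, args=(cmds, results))
    holder.start()
    cmds.put(('status', 'device1', None))
    print(results.get())  # e.g. ('device1', False) until it has connected
    cmds.put(('quit', None, None))
    holder.join()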

python client recv only receiving on exit inside BGE

Using Python 3, I'm trying to send a file from a server to a client as soon as the client connects to the server. The problem is that the client only continues from recv when I close it (when the connection is closed).
I'm running the client in the Blender Game Engine; the client runs until it gets to recv, then it just stops until I exit the game engine. Then I can see that the console receives the expected bytes.
From other threads I have read that this might be because the recv never gets an end; that's why I added "\r\n" to the end of the bytearray that the server is sending. But still, the client just stops at recv until I exit the application.
In the code below I'm only sending the first 6 bytes; these tell the client the size of the file. After this I intend to send the data of the file over the same connection.
What am I doing wrong here?
client:
import socket
import threading

def TcpConnection():
    TCPsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    TCPsocket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    server_address = ('localhost', 1338)
    TCPsocket.connect(server_address)
    print("TCP Socket open!, starting thread!")
    ServerResponse = threading.Thread(target=TcpReciveMessageThread, args=(TCPsocket,))
    ServerResponse.daemon = True
    ServerResponse.start()

def TcpReciveMessageThread(Sock):
    print("Tcp thread running!")
    size = Sock.recv(6)  # Sock.MSG_WAITALL
    print("Recived data", size)
    Sock.close()
Server:
import threading
import socket
import os

def StartTcpSocket():
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.bind(('localhost', 1338))
    server_socket.listen(10)
    while 1:
        connection, client_address = server_socket.accept()
        Response = threading.Thread(target=StartTcpClientThread, args=(connection,))
        Response.daemon = True  # thread dies when main thread (only non-daemon thread) exits.
        Response.start()

def StartTcpClientThread(socket):
    print("Sending data")
    length = 42
    l1 = ToByts(length)
    socket.send(l1)
    # loop that sends the file goes here
    print("Data sent")
    # socket.close()

def ToByts(Size):
    byt_res = Size.to_bytes(4, byteorder='big')
    result = bytearray()
    for r in byt_res:
        result.append(r)
    t = bytearray("\r\n", "utf-8")
    for b in t:
        result.append(b)
    return result

MessageListener = threading.Thread(target=StartTcpSocket)
MessageListener.daemon = True  # thread dies when main thread (only non-daemon thread) exits.
MessageListener.start()
while 1:
    pass
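For reference, a sketch (not in the original post) of the client-side counterpart to ToByts - the 6 received bytes are a 4-byte big-endian length followed by the b"\r\n" marker:

def FromByts(header):
    # header is the exact 6 bytes produced by ToByts(); note that a real
    # client should loop until all 6 bytes have arrived, since recv may
    # return fewer bytes than requested.
    assert header.endswith(b"\r\n")
    return int.from_bytes(header[:4], byteorder='big')

# usage in TcpReciveMessageThread: size = FromByts(Sock.recv(6))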
If the problem is that the client doesn't find an end of the stream, then how can I solve this without closing the connection, as I intend to send the file over the same connection?
Update #1:
To clarify, the print in the client that says "Recived" is printed only when I exit the game engine (when the client is closing). The loop that sends the file and the one that receives it were left out of the question as they are not the problem; the problem still occurs without them, and the client freezes at recv until it is closed.
Update #2:
Here is an image of what my consoles print when I run the server and the client:
As you can see, the "Recived" line is never printed.
When I exit the Blender Game Engine, I get this output:
Now, when the engine and the server script are exited/closed/finished, I get the data printed. So recv is probably pausing the thread until the socket is closed. Why is it doing this, and how can I get my data (and the print) before the socket closes? This also happens if I set
ServerResponse.daemon = False
Here is a .blend (on mediafire) of the client; the server runs on Python 3 (PyPy). I'm using Blender 2.78a.
Update #3:
I tested and verified that the problem is the same on Windows 10 and Linux Mint. I also made a video showing the problem:
In the video you can see how I only receive data from the server when I exit the Blender Game Engine. After some research I am beginning to suspect that the problem is related to Python threading not playing well with the BGE.
https://www.youtube.com/watch?v=T5l9YGIoDYA
I have observed a similar phenomenon. It appears that the Python instance doesn't receive any execution cycles from the Blender Game Engine (BGE) unless a controller gets invoked.
A simple solution is (see the sketch after this list):
Add another Always sensor that fires on every logic tick.
Add another Python controller that does nothing, a no-op.
Hook the sensor to the controller.
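A minimal sketch of such a no-op controller module (the module and function names are my assumptions, not from the answer):

# noop.py - attach as a module-mode Python controller ("noop.tick"),
# wired to an Always sensor with true-level (pulse) triggering enabled.
def tick(controller):
    pass  # does nothing; its only purpose is to give Python regular execution cycles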
I applied this to your .blend as shown in the following screen capture.
I tested it by running your server and it seems to work OK.
Cheers, Jim

PyQt - QThread.sleep(...) in separate thread blocks UI

First I'll give a short description of the user interface and its functions before diving into the problem and code. Sorry in advance, but I am unable to provide the complete code (even if I could, it's...a lot of lines :D).
Description of the UI and what it does
I have a custom QWidget, or to be more precise, N instances of that custom widget aligned in a grid layout. Each instance of the widget has its own QThread, which holds a worker QObject and a QTimer. In terms of UI components the widget contains two important elements - a QLabel, which visualizes a status, and a QPushButton, which either starts (by triggering a start() slot in the worker) or stops (by triggering a stop() slot in the worker) an external process. Both slots contain a 5s delay and also disable the push button during their execution. The worker itself not only controls the external process (through calls to the two slots mentioned above) but also checks whether the process is running via a status() slot, which is triggered by the QTimer every 1s. As mentioned, both the worker and the timer live inside the thread! (I have double-checked that by printing the thread IDs of the main thread (where the UI is) and of each worker - they are different, 100% sure.)
In order to reduce the number of calls from the UI to the worker and vice versa, I decided to declare the _status attribute of my Worker class (which holds the state of the external process - inactive, running, error) as a Q_PROPERTY with a setter, a getter and a notify signal, the last being emitted from within the setter IF and only if the value has changed from the old one. My previous design was much more signal/slot intensive, since the status was emitted literally every second.
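For reference, a minimal sketch of such a property in PyQt5 (the names are assumptions based on the description above); the notify signal is emitted from the setter only when the value actually changes:

from PyQt5.QtCore import QObject, pyqtProperty, pyqtSignal

class Worker(QObject):
    statusChanged_signal = pyqtSignal(int)

    def __init__(self, parent=None):
        super().__init__(parent)
        self._status = 0

    def getStatus(self):
        return self._status

    def setStatus(self, status):
        if self._status != status:  # notify only on an actual change
            self._status = status
            self.statusChanged_signal.emit(status)

    status = pyqtProperty(int, fget=getStatus, fset=setStatus,
                          notify=statusChanged_signal)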
Now it's time for some code. I have reduced it to only the parts which I deem provide enough info, plus the location where the problem occurs:
Inside the QWidget
# ...
def createWorker(self):
    # Create thread
    self.worker_thread = QThread()
    # Create worker and connect the UI to it
    self.worker = None
    if self.pkg: self.worker = Worker(self.cmd, self.pkg, self.args)
    else: self.worker = Worker(cmd=self.cmd, pkg=None, args=self.args)
    # Trigger attempt to recover previous state of the external process
    QTimer.singleShot(1, self.worker.recover)
    self.worker.statusChanged_signal.connect(self.statusChangedReceived)
    self.worker.block_signal.connect(self.block)
    self.worker.recover_signal.connect(self.recover)
    self.start_signal.connect(self.worker.start)
    self.stop_signal.connect(self.worker.stop)
    self.clear_error_signal.connect(self.worker.clear_error)
    # Create a timer which will trigger the status slot of the worker every 1s
    # (the status slot sends status updates back to the UI;
    # see the statusChangedReceived(self, status) slot)
    self.timer = QTimer()
    self.timer.setInterval(1000)
    self.timer.timeout.connect(self.worker.status)
    # Connect the thread to the worker and timer
    self.worker_thread.finished.connect(self.worker.deleteLater)
    self.worker_thread.finished.connect(self.timer.deleteLater)
    self.worker_thread.started.connect(self.timer.start)
    # Move the worker and timer to the thread...
    self.worker.moveToThread(self.worker_thread)
    self.timer.moveToThread(self.worker_thread)
    # Start the thread
    self.worker_thread.start()
@pyqtSlot(int)
def statusChangedReceived(self, status):
    '''
    Update the UI based on the status of the running process
    :param status - status of the process started and monitored by the worker
    The following values for status are possible:
    - INACTIVE/FINISHED - the visual indicator is set to the INACTIVE icon; this state indicates that the process has stopped running (without error) or has never been started
    - RUNNING - set if the process was started successfully
    - FAILED_START - occurs if the attempt to start the process has failed
    - FAILED_STOP - occurs if the process wasn't stopped from the UI but externally (normal exit or crash)
    '''
    # print(' --- main thread ID: %d ---' % QThread.currentThreadId())
    if status == ProcStatus.INACTIVE or status == ProcStatus.FINISHED:
        pass  # ...
    elif status == ProcStatus.RUNNING:
        pass  # ...
    elif status == ProcStatus.FAILED_START:
        pass  # ...
    elif status == ProcStatus.FAILED_STOP:
        pass  # ...
@pyqtSlot(bool)
def block(self, block_flag):
    '''
    Enable/disable the button which starts/stops the external process
    This slot is used to prevent the user from interacting with the UI while the external process is being started/stopped after a start/stop procedure has been initiated
    After the respective procedure has completed, the button is enabled again
    :param block_flag - enable/disable flag for the button
    '''
    self.execute_button.setDisabled(block_flag)
# ...
Inside the Worker
# ...
@pyqtSlot()
def start(self):
    self.block_signal.emit(True)
    if not self.active and not self.pid:
        self.active, self.pid = QProcess.startDetached(self.cmd, self.args, self.dir_name)
        QThread.sleep(5)
        # Check if launching the external process was successful
        if not self.active or not self.pid:
            self.setStatus(ProcStatus.FAILED_START)
            self.block_signal.emit(False)
            self.cleanup()
            return
        self.writePidToFile()
        self.setStatus(ProcStatus.RUNNING)
    self.block_signal.emit(False)

@pyqtSlot()
def stop(self):
    self.block_signal.emit(True)
    if self.active and self.pid:
        try:
            kill(self.pid, SIGINT)
            QThread.sleep(5)  # <----------------------- UI freezes here
        except OSError:
            self.setStatus(ProcStatus.FAILED_STOP)
            self.cleanup()
        self.active = False
        self.pid = None
        self.setStatus(ProcStatus.FINISHED)
    self.block_signal.emit(False)

@pyqtSlot()
def status(self):
    if self.active and self.pid:
        running = self.checkProcessRunning(self.pid)
        if not running:
            self.setStatus(ProcStatus.FAILED_STOP)
            self.cleanup()
            self.active = False
            self.pid = None

def setStatus(self, status):
    if self._status == status: return
    # print(' --- main thread ID: %d ---' % QThread.currentThreadId())
    self._status = status
    self.statusChanged_signal.emit(self._status)
And now about my problem: I have noticed that the UI freezes ONLY when the stop() slot is triggered and execution goes through QThread.sleep(5). I thought this should also affect start(), but with multiple instances of my widget (each controlling its own thread with a worker and timer living in it) all running, start() works as intended - the push button which triggers the start() and stop() slots gets disabled for 5 seconds and then enabled again. When stop() is triggered, this doesn't happen at all.
I really cannot explain this behaviour. What's even worse is that the status updates I emit through the Q_PROPERTY setter self.setStatus(...) get delayed due to this freezing, which leads to some extra calls of my cleanup() function, which basically deletes a generated file.
Any idea what is going on here? The nature of signals and slots is that once a signal is emitted, the slot connected to it is called right away. And since the UI runs in a different thread than the worker, I don't see why all this is happening.
I actually corrected the spot the problem was coming from in my question: in my original code I had forgotten the @ before pyqtSlot() on my stop() function. After adding it, everything works perfectly fine. I had NO IDEA that such a thing could cause such a huge problem!
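For context, a minimal sketch of the pattern (assuming, as the answer above suggests, that the missing decorator was the culprit): without @pyqtSlot, PyQt may route the cross-thread signal delivery through an internal proxy, so the method - including its blocking sleep - can end up executing in the wrong thread.

from PyQt5.QtCore import QObject, QThread, pyqtSlot

class Worker(QObject):
    @pyqtSlot()  # the decorator that was missing on stop()
    def stop(self):
        # with the decorator, this sleep blocks only the worker thread
        # the Worker has been moved to, not the GUI thread
        QThread.sleep(5)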
