Sharing an object (socket) between processes with multiprocessing - python-3.x

I would like to create a process that stores many objects (connections to devices via sockets).
I have a GUI (PyQt5) that should show progress information about the processes and status information about the devices. An example tells more:
# Process1
def conf1():
    dev = some_signal_that_ask_about_dev("device1")
    conf_dev(dev)
    return_device("device1", dev)

# Process2
def conf2():
    dev = some_signal_that_ask_about_dev("device2")
    do_sth_with_dev(dev)
    return_device("device2", dev)

# Process3
class DevicesHolder(object):
    def __init__(self):
        self.devices = {
            "device1": Device1("192.168.1.1", 8080),
            "device2": Device2("192.168.1.2", 8081)
        }

    def some_signal_that_ask_about_dev(self, dev_name):
        if self.devices[dev_name]:
            dev = self.devices[dev_name]
            # This device is taken by a process.
            # If the process takes the device and fails,
            # the device should be recreated!
            self.devices[dev_name] = None
            return dev

    def return_device(self, dev_name, dev):
        self.devices[dev_name] = dev

    def get_status_of_devices(self):
        # Check the connection to the devices and return a response
        pass

# Process 4:
# GUI:
get_status_of_devices()
So process1 and process2 do some work and send progress to the GUI. I would like to have info about device status as well.
Why not just create a local object in each process and send info from that process?
A process can run for a few seconds, while the app runs for minutes. I want to know that there is a connection problem before I press the start button, not have the whole app fail because of the connection.
I think I am overcomplicating a simple problem. Help me!
More info
Every process configures something different, but over the same connection.
I would like this to work as quickly as possible.
It will run on Linux, but I care about being multi-platform.
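Socket objects generally cannot be pickled and shared between processes, so one pattern that matches the sketch above is a single holder process that owns every connection and talks to the workers and the GUI over queues. A minimal sketch of that idea (the request/response protocol and the placeholder connection objects are my assumptions, not a working device API):

```python
import multiprocessing as mp

class DeviceHolder:
    """Owns the (unpicklable) device connections in a single process;
    workers and the GUI talk to it through queues instead of sharing sockets."""

    def __init__(self, request_q, response_q):
        self.request_q = request_q
        self.response_q = response_q
        self.devices = {}

    def run(self):
        # Create the connections here so the sockets live in exactly one
        # process. Tuples stand in for real socket objects in this sketch.
        self.devices["device1"] = ("192.168.1.1", 8080)
        self.devices["device2"] = ("192.168.1.2", 8081)
        while True:
            cmd, name = self.request_q.get()
            if cmd == "status":
                # report whether the device is currently available
                self.response_q.put((name, name in self.devices))
            elif cmd == "stop":
                break

def holder_main(request_q, response_q):
    DeviceHolder(request_q, response_q).run()

if __name__ == "__main__":
    req, resp = mp.Queue(), mp.Queue()
    holder = mp.Process(target=holder_main, args=(req, resp))
    holder.start()
    req.put(("status", "device1"))
    print(resp.get())  # -> ('device1', True)
    req.put(("stop", None))
    holder.join()
```

A "take device" / "return device" command pair would extend the same loop; the point is that only the holder process ever touches the sockets, so nothing unpicklable crosses a process boundary.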

Related

Appropriate way to run pytest unit tests for your API using threading.Thread and virtual ports with socat

So I have written an API for a device. The unit tests are going to run in CI automatically, therefore I will not test the connection with a real device; the purpose of these unit tests is just to verify that my API generates appropriate requests and reacts appropriately to responses.
Before I had the following:
import pytest
import serial
import threading
from src.device import Device  # that is my API

class TestDevice:
    @pytest.fixture(scope='class')
    def device(self):
        dev = Device()
        dev.connect(port='/dev/ttyUSB0')

dev.connect() repeatedly sends a command through the serial port to establish a handshake; it stays inside the function until a response is received or a timeout happens.
So in order to simulate the device, I have opened a virtual serial port pair using socat:
socat -d -d pty,raw,echo=0 pty,raw,echo=0
My idea is to write into one virtual port and read from the other. For that I launch another thread, read the message that has been sent, and upon the thread receiving the handshake request, send a reply like this:
import time  # needed for the timeout bookkeeping below

class TestDevice:
    @pytest.fixture(scope='class')
    def device(self):
        reader_thread = threading.Thread(target=self.reader)
        reader_thread.start()
        dev = Device()
        dev.connect('/dev/pts/3')

    def reader(self):
        EXPECTED_HANDSHAKE = b"hello"
        HANDSHAKE_REPLY = b"hi"
        timeout_handshake_ms = 1000
        reader_port = serial.Serial(port='/dev/pts/4', baudrate=115200)
        start_time_ns = time.time_ns()
        timeout_time_ns = start_time_ns + (timeout_handshake_ms * 1e6)
        while time.time_ns() < timeout_time_ns:
            response = reader_port.read(1024)
            # if dev.connect() sent an appropriate handshake request,
            # this port receives it and we reply
            if response == EXPECTED_HANDSHAKE:
                reader_port.write(HANDSHAKE_REPLY)
Once the reply is received, dev.connect() exits successfully and the device is considered connected. All of the code I have posted works. As you can see, my approach is to first start reading in a different thread, then send a command, and in the reader thread read the request and send an appropriate response if applicable. The connection part was the easy one. However, I have 30 commands to test, all of them with different inputs, multiple arguments, etc. The reader's response also varies depending on the request generated by the API. Therefore, I will need to send the same command with different arguments, and I will need to reply to a command in many different ways. What is the best way to organize my code so I can test everything as efficiently as possible? Do I need a thread for every command I am testing? Is there an efficient way to do all of this?
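One way to avoid a thread per command is a single table-driven responder: the mock device owns one request-to-reply mapping, and each test case (e.g. via pytest.mark.parametrize) just adds a row. Below is a sketch of that shape; it uses a socketpair instead of the socat ptys so it is self-contained, and the commands and replies (read_temp and so on) are made up for illustration:

```python
import socket
import threading

# One table serves all commands: request bytes -> reply bytes.
# Parametrized tests only need to add rows here, not new threads.
RESPONSES = {
    b"hello": b"hi",
    b"read_temp": b"23.5",
    b"set_mode 1": b"ok",
}

def mock_device(conn):
    """Single responder loop: look each incoming request up in the table."""
    while True:
        request = conn.recv(1024)
        if not request:  # peer closed the connection
            break
        conn.sendall(RESPONSES.get(request, b"error"))

# In the real tests `conn` would wrap the socat pty; a socketpair
# keeps this sketch runnable anywhere.
api_side, device_side = socket.socketpair()
t = threading.Thread(target=mock_device, args=(device_side,), daemon=True)
t.start()

api_side.sendall(b"hello")
print(api_side.recv(1024))  # -> b'hi'
api_side.sendall(b"read_temp")
print(api_side.recv(1024))  # -> b'23.5'
api_side.close()
```

For replies that depend on the request's arguments, the table values can be callables taking the parsed request instead of fixed byte strings; the single reader thread stays the same.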

How to implement custom timeout for function that connects to server

I want to establish a connection with a b0 client for Coppelia Sim using the Python API. Unfortunately, this connection function does not have a timeout and will run indefinitely if it fails to connect.
To counter that, I tried moving the connection to a separate process (multiprocessing) and checking after a couple of seconds whether the process is still alive. If it is, I kill the process and continue with the program.
This sort of works, as it no longer blocks my program; however, the process does not stop when the connection is successfully made, so it kills the process even when the connection succeeds.
How can I fix this and also write the b0client to the global variable?
def connection_function():
    global b0client
    b0client = b0RemoteApi.RemoteApiClient('b0RemoteApi_pythonClient', 'b0RemoteApi', 60)
    print('Success!')
    return 0

def establish_b0_connection(timeout):
    connection_process = multiprocessing.Process(target=connection_function)
    connection_process.start()
    # Wait for [timeout] seconds or until the process finishes
    connection_process.join(timeout=timeout)
    # If the process is still active
    if connection_process.is_alive():
        print('[INITIALIZATION OF B0 API CLIENT FAILED]')
        # Terminate - may not work if the process is stuck for good
        connection_process.terminate()
        # OR Kill - will work for sure, no chance for the process to finish nicely however
        # connection_process.kill()
        connection_process.join()
        print('[CONTINUING WITHOUT B0 API CLIENT]')
        return False
    else:
        return True

if __name__ == '__main__':
    b0client = None
    establish_b0_connection(timeout=5)
    # Continue with the code, with or without connection.
    # ...
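Part of the problem is that `global b0client` in the child only updates the child's copy of the variable: a multiprocessing child does not share memory with the parent, so the client object never reaches the main process. If the hanging call can simply be abandoned rather than force-killed, a daemon thread avoids both issues, since threads share the parent's memory. A sketch with a placeholder standing in for the b0RemoteApi call:

```python
import threading

result = {}  # shared with the thread, unlike a multiprocessing child's globals

def connection_function():
    # Placeholder for b0RemoteApi.RemoteApiClient(...), which may hang forever.
    result['b0client'] = object()
    print('Success!')

def establish_b0_connection(timeout):
    # daemon=True: if the connect call hangs past the timeout, the stuck
    # thread cannot be killed, but it won't keep the program alive at exit.
    t = threading.Thread(target=connection_function, daemon=True)
    t.start()
    t.join(timeout=timeout)  # returns when done or after `timeout` seconds
    if 'b0client' not in result:
        print('[INITIALIZATION OF B0 API CLIENT FAILED]')
        print('[CONTINUING WITHOUT B0 API CLIENT]')
        return False
    return True

if __name__ == '__main__':
    establish_b0_connection(timeout=5)
    # Continue with the code, with or without connection.
```

The trade-off versus the process approach: no hard kill is possible, but the successfully created client is actually visible to the caller.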

How to shut down CherryPy in no incoming connections for specified time?

I am using CherryPy to speak to an authentication server. The script runs fine if all the inputted information is fine. But if they make a mistake typing their ID, the internal HTTP error screen fires OK, but the server keeps running, and nothing else in the script will run until the CherryPy engine is closed, so I have to kill the script manually. Is there some code I can put in the index along the lines of
if timer > 10 and connections == 0:
    close cherrypy  (I have a method for this already)
I'm mostly a data mangler, so not used to web servers. Googling shows lots of hits for closing CherryPy when there are too many connections, but not when there have been no connections for a specified (short) time. I realise the point of a web server is usually to hang around waiting for connections, so this may be an odd case. All the same, any help is welcome.
Interesting use case. You can use the CherryPy plugins infrastructure to do something like that. Take a look at this ActivityMonitor plugin implementation: it shuts down the server if it is not handling anything and hasn't seen any request in a specified amount of time (in this case 10 seconds).
Maybe you have to adjust the logic on how to shut it down or do anything else in the _verify method.
If you want to read a bit more about the publish/subscribe architecture, take a look at the CherryPy Docs.
import time
import threading

import cherrypy
from cherrypy.process.plugins import Monitor

class ActivityMonitor(Monitor):
    def __init__(self, bus, wait_time, monitor_time=None):
        """
        bus: cherrypy.engine
        wait_time: Seconds since last request that we consider to be active.
        monitor_time: Seconds that we'll wait before verifying the activity.
                      If it is not defined, wait half the `wait_time`.
        """
        if monitor_time is None:
            # if monitor time is not defined, then verify half
            # the wait time since the last request
            monitor_time = wait_time / 2
        super().__init__(
            bus, self._verify, monitor_time, self.__class__.__name__
        )
        # use a lock to make sure the thread that triggers before_request
        # and after_request does not collide with the monitor method (_verify)
        self._active_request_lock = threading.Lock()
        self._active_requests = 0
        self._wait_time = wait_time
        self._last_request_ts = time.time()

    def _verify(self):
        # verify that we don't have any active requests and
        # shut down the server in case we haven't seen any activity
        # since self._last_request_ts + self._wait_time
        with self._active_request_lock:
            if (not self._active_requests and
                    self._last_request_ts + self._wait_time < time.time()):
                self.bus.exit()  # shut down the engine

    def before_request(self):
        with self._active_request_lock:
            self._active_requests += 1

    def after_request(self):
        with self._active_request_lock:
            self._active_requests -= 1
            # update the last time a request was served
            self._last_request_ts = time.time()

class Root:
    @cherrypy.expose
    def index(self):
        return "Hello user: current time {:.0f}".format(time.time())

def main():
    # here is how to use the plugin:
    ActivityMonitor(cherrypy.engine, wait_time=10, monitor_time=5).subscribe()
    cherrypy.quickstart(Root())

if __name__ == '__main__':
    main()

Creating detached processes from celery worker/alternative solution?

I'm developing a web service that will be used as a "database as a service" provider. The goal is to have a small Flask-based web service running on some host and "worker" processes running on different hosts owned by different teams. Whenever a team member comes and requests a new database, I should create one on their host. Now the problem... The service I start must keep running. The worker, however, might be restarted. Could happen in 5 minutes, could happen in 5 days. A simple Popen won't do the trick because it creates a child process, and if the worker stops later, the Popen'd process is destroyed (I tried this).
I have an implementation using multiprocessing which works like a champ; sadly, I cannot use this with celery, so out of luck there. I tried to get away from the multiprocessing library with double forking and named pipes. The most minimal sample I could produce:
import os
import select
import struct
import subprocess
import sys

from celery import shared_task

def launcher2(working_directory, cmd, *args):
    command = [cmd]
    command.extend(list(args))
    process = subprocess.Popen(command, cwd=working_directory, shell=False,
                               start_new_session=True,
                               stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    with open(f'{working_directory}/ipc.fifo', 'wb') as wpid:
        # pack the pid as bytes, matching the struct.unpack('I', ...) below
        wpid.write(struct.pack('I', process.pid))

@shared_task(bind=True, name="Test")
def run(self, cmd, *args):
    working_directory = '/var/tmp/workdir'
    if not os.path.exists(working_directory):
        os.makedirs(working_directory, mode=0o700)
    ipc = f'{working_directory}/ipc.fifo'
    if os.path.exists(ipc):
        os.remove(ipc)
    os.mkfifo(ipc)
    pid1 = os.fork()
    if pid1 == 0:
        os.setsid()
        os.umask(0)
        pid2 = os.fork()
        if pid2 > 0:
            sys.exit(0)
        os.setsid()
        os.umask(0)
        launcher2(working_directory, cmd, *args)
    else:
        with os.fdopen(os.open(ipc, flags=os.O_NONBLOCK | os.O_RDONLY), 'rb') as ripc:
            readers, _, _ = select.select([ripc], [], [], 15)
            if not readers:
                raise TimeoutError(60, 'Timed out', ipc)
            reader = readers.pop()
            pid = struct.unpack('I', reader.read())[0]
        pid, status = os.waitpid(pid, 0)
        print(status)

if __name__ == '__main__':
    async_result = run.apply_async(('/usr/bin/sleep', '15'), queue='q2')
    print(async_result.get())
My use case is more complex, but I don't think anyone would want to read 200+ lines of bootstrapping; this minimal sample fails in exactly the same way. On the other hand, I don't wait for the pid unless that's required, so the idea is to start the process on request and let it do its job. Bootstrapping a database takes roughly a minute with the full setup, and I don't want the clients standing by for a minute. A request comes in, I spawn the process and send back an id for the database instance, and the client can query the status based on the received instance id. However, with the above forking solution I get:
[2020-01-20 18:03:17,760: INFO/MainProcess] Received task: Test[dbebc31c-7929-4b75-ae28-62d3f9810fd9]
[2020-01-20 18:03:20,859: ERROR/MainProcess] Process 'ForkPoolWorker-2' pid:16634 exited with 'signal 15 (SIGTERM)'
[2020-01-20 18:03:20,877: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: signal 15 (SIGTERM).')
Traceback (most recent call last):
File "/home/pupsz/PycharmProjects/provider/venv37/lib/python3.7/site-packages/billiard/pool.py", line 1267, in mark_as_worker_lost
human_status(exitcode)),
billiard.exceptions.WorkerLostError: Worker exited prematurely: signal 15 (SIGTERM).
Which leaves me wondering what might be going on. I tried an even simpler task:
@shared_task(bind=True, name="Test")
def run(self, cmd, *args):
    working_directory = '/var/tmp/workdir'
    if not os.path.exists(working_directory):
        os.makedirs(working_directory, mode=0o700)
    command = [cmd]
    command.extend(list(args))
    process = subprocess.Popen(command, cwd=working_directory, shell=False,
                               start_new_session=True,
                               stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return process.wait()

if __name__ == '__main__':
    async_result = run.apply_async(('/usr/bin/sleep', '15'), queue='q2')
    print(async_result.get())
Which again fails with the very same error. Now, I like Celery, but from this it feels like it is not suited to my needs. Did I mess something up? Can what I need be achieved from a worker? Do I have any alternatives, or should I just write my own task queue?
Celery is not multiprocessing-friendly, so try to use billiard instead of multiprocessing (from billiard import Process, etc.). I hope one day the Celery developers do a heavy refactoring of that code, remove billiard, and start using multiprocessing instead...
So, until they move to multiprocessing, we are stuck with billiard. My advice is to remove any usage of multiprocessing in your Celery tasks and start using billiard.context.Process and similar, depending on your use case.
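Independently of the billiard point, the double-fork in the question can be tightened: the caller should reap the intermediate child (avoiding a zombie), and the PID has to be written as packed bytes, exactly as the reading side unpacks it. A POSIX-only sketch of the detach pattern, using a plain pipe instead of the named fifo; note this illustrates detaching in general, not a fix for the WorkerLostError inside Celery's pool:

```python
import os
import struct
import subprocess
import sys

def spawn_detached(command):
    """Double fork: the spawned command survives the caller (e.g. a restarted
    worker) because it is reparented to init once the middle process exits.
    Returns the detached PID; the caller must not waitpid() on it."""
    r, w = os.pipe()
    pid1 = os.fork()
    if pid1 == 0:
        # Intermediate child: new session, spawn the real process,
        # report its PID back over the pipe, then vanish.
        os.close(r)
        os.setsid()
        proc = subprocess.Popen(command, start_new_session=True,
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        os.write(w, struct.pack('I', proc.pid))
        os._exit(0)
    # Parent: reap the intermediate child immediately (no zombie),
    # then read the detached PID from the pipe.
    os.close(w)
    os.waitpid(pid1, 0)
    data = os.read(r, 4)
    os.close(r)
    return struct.unpack('I', data)[0]

if __name__ == '__main__':
    pid = spawn_detached([sys.executable, '-c', 'import time; time.sleep(15)'])
    print('detached pid:', pid)
```

Whether this coexists with Celery's prefork pool and its SIGTERM handling is a separate question; inside a task, the safer route remains billiard, as suggested above.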

How to lock virtualbox to get a screenshot through SOAP API

I'm trying to use the SOAP interface of VirtualBox 6.1 from Python to get a screenshot of a machine. I can start the machine, but I get locking errors whenever I try to retrieve the screen layout.
This is the code:
import zeep

# helper to show the session lock status
def show_lock_state(session_id):
    session_state = service.ISession_getState(session_id)
    print('current session state:', session_state)

# connect
client = zeep.Client('http://127.0.0.1:18083?wsdl')
service = client.create_service("{http://www.virtualbox.org/}vboxBinding", 'http://127.0.0.1:18083?wsdl')
manager_id = service.IWebsessionManager_logon('fakeuser', 'fakepassword')
session_id = service.IWebsessionManager_getSessionObject(manager_id)

# get the machine id and start it
machine_id = service.IVirtualBox_findMachine(manager_id, 'Debian')
progress_id = service.IMachine_launchVMProcess(machine_id, session_id, 'gui')
service.IProgress_waitForCompletion(progress_id, -1)
print('Machine has been started!')
show_lock_state(session_id)

# unlock and then lock to be sure; doesn't have any effect apparently
service.ISession_unlockMachine(session_id)
service.IMachine_lockMachine(machine_id, session_id, 'Shared')
show_lock_state(session_id)

console_id = service.ISession_getConsole(session_id)
display_id = service.IConsole_getDisplay(console_id)
print(service.IDisplay_getGuestScreenLayout(display_id))
The machine is started properly, but the last line gives the error VirtualBox error: rc=0x80004001, which from what I read around means a locked session.
I tried to release and acquire the lock again, but even though that succeeds, the error remains. I went through the documentation but cannot find other types of locks that I'm supposed to use, except the Write lock, which is not usable here since the machine is running. I could not find any example in any language.
I found an Android app called VBoxManager with this SOAP screenshot capability.
Running it through a MITM proxy, I reconstructed the calls it performs and rewrote them as the Zeep equivalent. In case anyone is interested in the future, the last lines of the above script are now:
import base64  # needed to decode the returned image data

console_id = service.ISession_getConsole(session_id)
display_id = service.IConsole_getDisplay(console_id)
resolution = service.IDisplay_getScreenResolution(display_id, 0)
print(f'display data: {resolution}')
image_data = service.IDisplay_takeScreenShotToArray(
    display_id,
    0,
    resolution['width'],
    resolution['height'],
    'PNG')
with open('screenshot.png', 'wb') as f:
    f.write(base64.b64decode(image_data))
