Non-blocking /dev/hidrawX reading with asyncio in Linux - python-3.x

What is the right spell to do something like:
async def read(fd):
    return fd.readline()

with open('/dev/hidraw0', 'rb') as fd:
    while True:
        line = await read(fd)
        if line is None:
            break
        consume(line)
I need to poll /dev/hidrawX from a program built around asyncio.
How can I do it in non-blocking fashion?
I would like to avoid going the /dev/input/eventXX route, with all its associated conversion problems (and also because I tried it and events were lost in transit).
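One possible approach (a minimal sketch, not from the original post; the 64-byte report size and the consume() call from the snippet above are assumptions) is to open the device non-blocking and let the event loop watch the descriptor with loop.add_reader():

import asyncio
import os

async def read_reports(path='/dev/hidraw0', report_size=64):
    loop = asyncio.get_running_loop()
    queue = asyncio.Queue()
    # open the device in non-blocking mode so os.read() never stalls the loop
    fd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)

    def on_readable():
        # called by the event loop whenever the device has a report ready
        queue.put_nowait(os.read(fd, report_size))

    loop.add_reader(fd, on_readable)
    try:
        while True:
            report = await queue.get()
            consume(report)
    finally:
        loop.remove_reader(fd)
        os.close(fd)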

Related

What happens when using python input() if no TTY?

I am trying to write an API client for Telegram using Telethon.
https://github.com/LonamiWebs/Telethon
If you create a TelegramClient(session) it prompts for input upon initialization if your session isn’t authorized.
This is great when manually running the program from the terminal, but what if I want to run it inside a daemon or cron job?
They are using the default input() from Python 3 to gather the input. I don't see any way in the library to specify a session file and check whether it's logged in before initializing a TelegramClient, and it's the initializer that will prompt for input if not logged in.
This feels like a catch 22! Does anybody know if this might produce an error that could be caught? Or what happens when input() is run with no tty? Would it just hang? Could I add a timeout in that case?
Thanks in advance for helping me understand better!
You state that the initialization of TelegramClient invokes input() by default, but this actually happens inside the TelegramClient.start method (docs).
The solution you suggest at the end of your question is a fair approach, so let's use a timeout when invoking input().
from asyncio import get_event_loop, wait_for, TimeoutError
from functools import partial
from telethon import TelegramClient

async def ainput(prompt):
    """Reads input from stdin in an async way"""
    loop = get_event_loop()
    await loop.run_in_executor(None, print, prompt)
    return await loop.run_in_executor(None, input)

async def get_code(timeout):
    """Waits for the code from stdin with a timeout"""
    try:
        return await wait_for(
            ainput("Please, type the code you received: "),
            timeout=timeout
        )
    except TimeoutError:
        pass

client = TelegramClient(session, api_id, api_hash).start(
    phone=phone,
    code_callback=partial(get_code, 30)
)
You should keep in mind that when you call start, the phone and password arguments also read from stdin if no callable or default value is provided, so you can handle them the same way as code_callback in this example.
In your case you could get the code from a POST to your API or some other way; just get creative and write the callable that fits your needs.
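For instance, something along these lines (a sketch only; get_password is a hypothetical callable you would supply yourself):

client = TelegramClient(session, api_id, api_hash).start(
    phone=phone,                          # a value or a callable; either avoids the stdin prompt
    password=get_password,                # only invoked if the account has 2FA enabled
    code_callback=partial(get_code, 30)
)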

What is best practice to interact with subprocesses in python

I'm building an application that is intended to do bulk-job processing of data within another piece of software. To control the other software automatically I'm using pyautoit, and everything works fine, except for application errors caused by the external software, which occur from time to time.
To handle those cases, I built a watchdog:
It starts the script with the bulk job within a subprocess
process = subprocess.Popen(['python', job_script, src_path],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE, shell=True)
It listens to the system event using winevt.EventLog module
EventLog.Subscribe('System', 'Event/System[Level<=2]', handle_event)
In case an error occurs, it shuts everything down and restarts the script.
OK, if a system error event occurs, this event should be handled in a way that the subprocess gets notified. This notification should then lead to the following action within the subprocess:
Within the subprocess there's an object controlling everything and continuously collecting generated data. In order not to have to start the whole job from the beginning after restarting the script, this object has to be dumped using pickle (which isn't the problem here!).
Listening to the system event from inside the subprocess didn't work: it results in a continuous loop when calling subprocess.Popen().
So, my question is how I can either subscribe to system events from inside a child process, or communicate between the parent and child process - that is, send a message like "hey, an error occurred", listen for it within the subprocess and then create the dump?
I'm really sorry that I'm not allowed to post any code in this case, but I hope (and actually think) that my description is understandable. My question is just about which module to use to accomplish this in the best way.
Would be really happy, if somebody could point me into the right direction...
Br,
Mic
I believe the best answer may lie here: https://docs.python.org/3/library/subprocess.html#subprocess.Popen.stdin
These attributes should allow for proper communication between the different processes fairly easily, and without any other dependencies.
Note that Popen.communicate() may suit better if other processes may cause issues.
EDIT to add example scripts:
main.py
from subprocess import *
import sys

def check_output(p):
    out = p.stdout.readline()
    return out

def send_data(p, data):
    p.stdin.write(bytes(f'{data}\r\n', 'utf8'))  # auto newline
    p.stdin.flush()

def initiate(p):
    #p.stdin.write(bytes('init\r\n', 'utf8'))  # function to send first communication
    #p.stdin.flush()
    send_data(p, 'init')
    return check_output(p)

def test(p, data):
    send_data(p, data)
    return check_output(p)

def main():
    exe_name = 'Doc2.py'
    p = Popen([sys.executable, exe_name], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
    print(initiate(p))
    print(test(p, 'test'))
    print(test(p, 'test2'))  # testing responses
    print(test(p, 'test3'))

if __name__ == '__main__':
    main()
Doc2.py
import sys

def recv_data():
    return sys.stdin.readline()

def send_data(data):
    print(data, flush=True)  # flush so the parent's readline() gets the reply immediately

while 1:
    d = recv_data()
    #print(f'd: {d}')
    if d.strip() == 'test':
        send_data('return')
    elif d.strip() == 'init':
        send_data('Acknowledge')
    else:
        send_data('Failed')
This is the best method I could come up with for cross-process communication. Also make sure all requests and responses don't contain newlines, or the code will break.

Why is run_forever required in this sample code?

In the python asyncio websockets library, the example calls run_forever(). Why is this required?
Shouldn't run_until_complete() block and run the websockets loop?
#!/usr/bin/env python
# WS server example
import asyncio
import websockets

async def hello(websocket, path):
    name = await websocket.recv()
    print(f"< {name}")
    greeting = f"Hello {name}!"
    await websocket.send(greeting)
    print(f"> {greeting}")

start_server = websockets.serve(hello, "localhost", 8765)

asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
# if you comment out the above line, this doesn't work, i.e., the server
# doesn't actually block waiting for data...
If I comment out run_forever(), the program ends immediately.
start_server is an awaitable returned by the library. Why isn't run_until_complete sufficient to cause it to block/await on hello()?
websockets.serve simply starts the server and returns immediately. (It still needs to be awaited because configuring the server can require network communication.) Because of that, you need to actually run the event loop.
Since the server is designed to run indefinitely, you cannot run the event loop in the usual way, by passing a coroutine to run_until_complete. As the server has already started, there is no coroutine to run, you just need to let the event loop run and do its job. This is where run_forever comes in useful - it tells the event loop to run (executing the tasks previously scheduled, such as those belonging to the server) indefinitely, or until told to stop by a call to loop.stop.
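To illustrate that last point (a minimal sketch, not part of the original example): run_forever() only returns after something running on the loop calls loop.stop().

loop = asyncio.get_event_loop()
loop.call_later(5, loop.stop)   # schedule a stop five seconds from now
loop.run_forever()              # blocks here until loop.stop() is called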
In Python 3.7 and later one should use asyncio.run to run asyncio code, which will create a new event loop, so the above trick won't work. A good way to accomplish the above in modern asyncio code would be to use the serve_forever method (untested):
async def my_server():
    ws_server = await websockets.serve(hello, "localhost", 8765)
    await ws_server.server.serve_forever()

asyncio.run(my_server())

Track (un)successful object initialisation

I have a python script that controls different test-instruments (signal generator, amplifier, spectrum analyzer...) to automate a test.
These devices communicate over ethernet or serial with the pc running this python script.
I wrote a class for each device that I use. The script starts with initializing an instance of those classes. Something like this:
multimeter = Multimeter('192.168.1.5', 5025)
amplifier = Amplifier('192.168.1.9', 5025)
stirrer = Stirrer('COM4', 9600)
.....
This can go wrong in many ways (battery low, device not turned on, cable not connected...).
It's possible to catch the errors with a try-except block:
try:
    multimeter = Multimeter('192.168.1.5', 5025)
    amplifier = Amplifier('192.168.1.9', 5025)
    stirrer = Stirrer('COM4', 9600)
    .....
except:
    multimeter.close()
    amplifier.close()
    stirrer.close()
But now the problem is inside the except code block... We are not sure if the initialization of the objects succeeded and if they exist. They may not exist and so we can't call the close() method.
Because creating the instances is just normal sequential code, I know that when creating an instance of one of my classes fails, all the instances created on the lines before it succeeded. So you can try to create an instance of each class, check whether that fails, and if it fails close the connections of all previously created objects.
try:
    multimeter = Multimeter('192.168.1.5', 5025)
except:
    # problem with the multimeter
    print('error')

try:
    amplifier = Amplifier('192.168.1.9', 5025)
except:
    # problem with the amplifier, but we can close the multimeter
    multimeter.close()

try:
    stirrer = Stirrer('COM4', 9600)
except:
    # problem with the stirrer, but we can close the multimeter and the amplifier
    multimeter.close()
    amplifier.close()

....
But I think this is ugly code. In particular, when the number of objects (here, test instruments) grows, it becomes unmanageable, and it's error-prone when you want to add or remove an object... Is there a better way to be sure that all connections are closed? Sockets should be closed on failure so we can assign the IP address and port to a socket the next time the script is executed. The same goes for the serial interfaces: if one isn't closed, it will also raise an error, because you can't connect to a serial interface that is already open...
Use a container to store the already created instruments, and split your code into short, independent, manageable parts:
def create_instruments(defs):
    instruments = {}
    for key, cls, params in defs:
        try:
            instruments[key] = cls(*params)
        except Exception as e:
            print("failed to instantiate '{}': {}".format(key, e))
            close_instruments(instruments)
            raise
    return instruments

def close_instruments(instruments):
    for key, instrument in instruments.items():
        try:
            instrument.close()
        except Exception as e:
            # just mention it - we can't do much more anyway
            print("got error {} when closing {}".format(e, key))

instruments_defs = [
    # (key, class, (param1, ...))
    ("multimeter", Multimeter, ("192.168.1.5", 5025)),
    ("amplifier", Amplifier, ("192.168.1.9", 5025)),
    ("stirrer", Stirrer, ('COM4', 9600)),
]

instruments = create_instruments(instruments_defs)
You may also want to have a look at context managers (making sure resources are properly released is the main purpose of context managers), but it might not necessarily be the best choice here (it depends on how you use those objects, how your code is structured, etc.).
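For illustration, a minimal sketch of that context-manager route using contextlib.ExitStack, with the instrument classes and constructor arguments taken from the question; registering each close() as a callback means the classes don't need their own __enter__/__exit__ methods:

from contextlib import ExitStack

with ExitStack() as stack:
    multimeter = Multimeter('192.168.1.5', 5025)
    stack.callback(multimeter.close)
    amplifier = Amplifier('192.168.1.9', 5025)
    stack.callback(amplifier.close)
    stirrer = Stirrer('COM4', 9600)
    stack.callback(stirrer.close)
    # run the test here; every registered close() runs when the block exits,
    # and if a constructor fails, everything created before it is still closed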
In fact, the solution that I'm suggesting in my question is the easiest way to solve this issue. In the try block, the script tries to initialize the instances one by one.
If you close the objects in the same order that they're created in the try block, then closing the connection will succeed for every test instrument except the instruments that were not initialized because of the error that happened in the try block.
(see comments in code snippet)
try:
    multimeter = Multimeter('192.168.1.5', 5025)  # success
    amplifier = Amplifier('192.168.1.9', 5025)    # success
    stirrer = Stirrer('COM4', 9600)        # error: COM4 is not available --> jump to except
    generator = Generator()                # not initialized because of error in stirrer init
    otherTestInstrument = OtherTestInstrument()  # not initialized because of error in stirrer init
    .....
except:
    multimeter.close()   # initialized in try, so close() works
    amplifier.close()    # initialized in try, so close() works
    stirrer.close()      # probably initialized in try, so close() works probably
    generator.close()    # not initialized, will raise error, but doesn't matter.
    otherTestInstrument.close()  # also not initialized. No need to close it too.

Implementing a while loop without interrupting the bot's main event loop

I am in the middle of having two of my bots interface with each other via a ZMQ server. Unfortunately that also requires a second loop for the receiver, so I started looking around the web for solutions and came up with this:
async def interfaceSocket():
    while True:
        message = socket.recv()
        time.sleep(1)
        socket.send(b"World")
        await asyncio.sleep(3)

@client.event
async def on_ready():
    print('logged in as:')
    print(client.user.name)
    client.loop.create_task(interfaceSocket())

client.run(TOKEN)
I basically added the interfaceSocket function to the event loop as a task (another while loop) so I can constantly check the socket receiver while also serving the on_message listener from the Discord bot itself, but for some reason the loop still interrupts the main event loop. Why is this?
Although interfaceSocket is technically a task, it doesn't await anything in its while loop and uses blocking calls such as socket.recv() and time.sleep(). Because of that it blocks the whole event loop while it waits for something to happen.
If socket refers to a ZMQ socket, you should be using the ZMQ asyncio interface, i.e. use zmq.asyncio.Context to create a zmq.asyncio.Socket instead of a blocking one. Then interfaceSocket can use await and become a well-behaved coroutine:
async def interfaceSocket():
    while True:
        message = await socket.recv()
        await asyncio.sleep(1)
        await socket.send(b"World")
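For completeness, here is a sketch of how that async socket could be created (the REP socket type and the bind address are assumptions; adjust them to however your ZMQ server is actually set up):

import zmq
import zmq.asyncio

ctx = zmq.asyncio.Context()
socket = ctx.socket(zmq.REP)         # asyncio-aware socket: recv()/send() return awaitables
socket.bind("tcp://127.0.0.1:5555")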
