Appropriate way to run pytest unit tests for your API using threading.Thread and virtual ports with socat - multithreading

So I have written an API for a device. The unit tests are going to run automatically on CI, so I will not test the connection with an actual device; the purpose of these unit tests is only to check that my API generates appropriate requests and reacts appropriately to responses.
Before I had the following:
import serial
import threading
import pytest

from src.device import Device  # that is my API

class TestDevice:
    @pytest.fixture(scope='class')
    def device(self):
        dev = Device()
        dev.connect(port='/dev/ttyUSB0')
dev.connect() constantly sends a command through the serial port to establish a handshake; it stays inside the function until a response is received or a timeout happens.
So in order to simulate the device, I opened a pair of virtual serial ports using socat:
socat -d -d pty,raw,echo=0 pty,raw,echo=0
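Run with -d -d, socat reports the two PTY paths it allocated (the exact numbers vary per run); the output typically contains lines along these lines:
socat[5894] N PTY is /dev/pts/3
socat[5894] N PTY is /dev/pts/4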
My idea is to write into one virtual port and read from the other. For that I launch another thread that reads the message that has been sent, and when that thread receives the handshake request, I send a reply like this:
import time

class TestDevice:
    @pytest.fixture(scope='class')
    def device(self):
        reader_thread = threading.Thread(target=self.reader)
        reader_thread.start()
        dev = Device()
        dev.connect('/dev/pts/3')

    def reader(self):
        EXPECTED_HANDSHAKE = b"hello"
        HANDSHAKE_REPLY = b"hi"
        timeout_handshake_ms = 1000
        reader_port = serial.Serial(port='/dev/pts/4', baudrate=115200)
        start_time_ns = time.time_ns()
        timeout_time_ns = start_time_ns + (timeout_handshake_ms * 1e6)
        while time.time_ns() < timeout_time_ns:
            response = reader_port.read(1024)
            # if dev.connect() sent an appropriate handshake request,
            # this port receives it and replies
            if response == EXPECTED_HANDSHAKE:
                reader_port.write(HANDSHAKE_REPLY)
And once the reply is received, dev.connect() exits successfully and the device is considered connected. All of the code I have posted works. As you can see, my approach is to first start reading in a different thread, then send a command; the reader thread reads the request and sends the appropriate response if applicable. The connection part was the easy one. However, I have 30 commands to test, all of them with different inputs, multiple arguments, etc. The reader's response also varies depending on the request generated by the API. Therefore, I will need to send the same command with different arguments, and I will need to reply to a command in many different ways. What is the best way to organize my code so I can test everything as efficiently as possible? Do I need a thread for every command I am testing? Is there an efficient way to do all of this?
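One way to avoid a thread per command (a sketch under assumptions, not a definitive design): keep a single reader thread driven by a request-to-reply table, and parametrize the tests over commands and arguments. The table contents, the port paths, and names like FakeDevice and get_temp are hypothetical:

import threading
import serial
import pytest

from src.device import Device

# Hypothetical request -> reply table; add one entry per command/argument
# combination the simulated device should answer.
RESPONSE_TABLE = {
    b"hello": b"hi",
    b"get_temp 1": b"temp 23.5",
}

class FakeDevice(threading.Thread):
    """Single reader thread that answers every request from the table."""
    def __init__(self, port):
        super().__init__(daemon=True)
        # a read timeout lets the loop check self.running periodically
        self.port = serial.Serial(port=port, baudrate=115200, timeout=0.1)
        self.running = True

    def run(self):
        while self.running:
            request = self.port.read(1024)
            reply = RESPONSE_TABLE.get(request)
            if reply is not None:
                self.port.write(reply)

class TestDevice:
    @pytest.fixture(scope='class')
    def device(self):
        fake = FakeDevice('/dev/pts/4')   # reader side of the socat pair
        fake.start()
        dev = Device()
        dev.connect('/dev/pts/3')         # writer side
        yield dev
        fake.running = False

    # One parametrized test per command instead of one thread per command;
    # the argument/expectation pairs are made up for illustration.
    @pytest.mark.parametrize("channel, expected", [(1, 23.5)])
    def test_get_temp(self, device, channel, expected):
        assert device.get_temp(channel) == expected

With this shape, adding a new command to test mostly means adding entries to RESPONSE_TABLE and a parametrize list, rather than new threads.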


Can connection.AioHttpApp handle multiple requests?

I have an aiohttp application that I generated from a JSON openapi document.
It works well if I send the requests one at a time, for example:
curl -X GET http://localhost:8000/v1/foo
The problem is that I have a scenario where one function takes some time to complete, and in the meantime another call comes in to the application; when that happens, the aiohttp application cannot handle the new request and blocks.
I tried to test it like the following:
I created a while True loop inside one call.
import time
from aiohttp import web

async def get_devices_count(request: web.Request) -> web.Response:
    while 1:
        time.sleep(2)
        print("Still in devices count ...")
    return web.Response(status=200)
I just print some info in another call.
async def get_device(request: web.Request) -> web.Response:
    print("get device")
    return web.Response(status=200)
The AioHttp application should be able to handle multiple requests.
Here is how I run it:
$ python3 -m myapplication
======== Running on http://0.0.0.0:8000 ========
(Press CTRL+C to quit)
Now, when I request the while-True endpoint, it starts running inside the loop:
curl -X GET http://localhost:8000/v1/devices/count
I see:
======== Running on http://0.0.0.0:8000 ========
(Press CTRL+C to quit)
Still in devices count ...
Still in devices count ...
Still in devices count ...
Still in devices count ...
Still in devices count ...
Still in devices count ...
Now, when I try to request the other endpoint, it gets stuck waiting and nothing is shown in the aiohttp console.
My question is: is this an aiohttp limitation, or is there a way to make it work?
Thanks,
Talel
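For what it's worth, a likely cause (an assumption based on the handler shown above, not a confirmed diagnosis): time.sleep() is a blocking call, so it stalls the single-threaded event loop and no other handler can run until it returns. A minimal non-blocking sketch of the same handler would await asyncio.sleep() instead:

import asyncio
from aiohttp import web

async def get_devices_count(request: web.Request) -> web.Response:
    while 1:
        # asyncio.sleep yields control back to the event loop, unlike
        # time.sleep, so other requests (e.g. get_device) can be served.
        await asyncio.sleep(2)
        print("Still in devices count ...")
    return web.Response(status=200)  # unreachable, mirrors the original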

Direct communication between Javascript in Jupyter and server via IPython kernel

I'm trying to display an interactive mesh visualizer based on Three.js inside a Jupyter cell. The workflow is the following:
The user launches a Jupyter notebook and opens the viewer in a cell
Using Python commands, the user can manually add meshes and animate them interactively
In practice, the main thread sends requests to a server via ZMQ sockets (every request needs a single reply), then the server sends back the desired data to the main thread using other socket pairs (many "requests", very few replies expected), which finally uses communication through the IPython kernel to send the data to the Javascript frontend. So far so good, and it works properly because the messages all flow in the same direction:
Main thread (Python command) [ZMQ REQ] -> [ZMQ REP] Server (Data) [ZMQ XREQ] -> [ZMQ XREQ] Main thread (Data) [IPykernel Comm] -> [Ipykernel Comm] Javascript (Display)
However, the pattern is different when I want to fetch the status of the frontend, in order to wait for the meshes to finish loading:
Main thread (Status request) --> Server (Status request) --> Javascript (Processing)
Javascript (Reply) --> Main thread (Forward reply) --> Server (Reply) --> Main thread (Waiting for reply)
This time, the server sends a request to the frontend, which in return does not send the reply directly back to the server, but to the main thread, which forwards the reply to the server, and finally back to the main thread.
There is a clear issue: the main thread is supposed to jointly forward the reply of the frontend and receive the reply from the server, which is impossible. The ideal solution would be to enable the server to communicate directly with the frontend, but I don't know how to do that, since I cannot use get_ipython().kernel.comm_manager.register_target on the server side. I tried to instantiate an IPython kernel client on the server side using jupyter_client.BlockingKernelClient, but I didn't manage to use it to communicate nor to register targets.
OK, so I found a solution for now, but it is not great. Instead of just waiting for a reply and keeping the main loop busy, I added a timeout and interleaved it with do_one_iteration of the kernel to force it to handle messages:
while True:
    try:
        rep = zmq_socket.recv(flags=zmq.NOBLOCK).decode("utf-8")
        break  # reply received
    except zmq.error.ZMQError:
        kernel.do_one_iteration()
It works, but unfortunately it is not really portable and it messes up the Jupyter evaluation stack (all queued evaluations will be processed here instead of in order)...
Alternatively, there is another way that is more appealing:
import zmq
import asyncio
import nest_asyncio

nest_asyncio.apply()

zmq_socket.send(b"ready")

async def enforce_receive():
    await kernel.process_one(True)
    return zmq_socket.recv().decode("utf-8")

loop = asyncio.get_event_loop()
rep = loop.run_until_complete(enforce_receive())
but in this case you need to know in advance that you expect the kernel to receive exactly one message, and relying on nest_asyncio is not ideal either.
Here is a link to an issue on this topic on GitHub, along with an example notebook.
UPDATE
I finally managed to solve my issue completely, without shortcomings. The trick is to analyze every incoming message. Irrelevant messages are put back in the queue in order, while the comm-related ones are processed on the spot:
import zmq
from IPython import get_ipython
# SHELL_PRIORITY is defined in ipykernel.kernelbase (ipykernel 5.x)
from ipykernel.kernelbase import SHELL_PRIORITY

class CommProcessor:
    """
    @brief   Re-implementation of ipykernel.kernelbase.do_one_iteration
             to only handle comm messages on the spot, and put the
             other ones back in the stack.
    @details Calling 'do_one_iteration' messes with the kernel
             'msg_queue'. Some messages will be processed too soon,
             which is likely to corrupt the kernel state. This method
             only processes comm messages to avoid such side effects.
    """

    def __init__(self):
        self.__kernel = get_ipython().kernel
        self.qsize_old = 0

    def __call__(self, unsafe=False):
        """
        @brief     Check once whether there is a pending comm-related
                   event in the shell stream message priority queue.
        @param[in] unsafe  Whether to assume that checking if the number
                           of pending messages has changed is enough. It
                           makes the evaluation much faster but flawed.
        """
        # Flush every incoming message on shell_stream only.
        # Note that this is a faster implementation of ZMQStream.flush
        # that only handles incoming messages. It reduces the computation
        # time from about 10us to 20ns.
        # https://github.com/zeromq/pyzmq/blob/e424f83ceb0856204c96b1abac93a1cfe205df4a/zmq/eventloop/zmqstream.py#L313
        shell_stream = self.__kernel.shell_streams[0]
        shell_stream.poller.register(shell_stream.socket, zmq.POLLIN)
        events = shell_stream.poller.poll(0)
        while events:
            _, event = events[0]
            if event:
                shell_stream._handle_recv()
                shell_stream.poller.register(shell_stream.socket, zmq.POLLIN)
            events = shell_stream.poller.poll(0)

        qsize = self.__kernel.msg_queue.qsize()
        if unsafe and qsize == self.qsize_old:
            # The number of messages in the queue has not changed since
            # the last time it was checked. Assume those messages are
            # the same as before and return early.
            return

        # One must go through all the messages to keep them in order.
        for _ in range(qsize):
            priority, t, dispatch, args = self.__kernel.msg_queue.get_nowait()
            if priority <= SHELL_PRIORITY:
                _, msg = self.__kernel.session.feed_identities(
                    args[-1], copy=False)
                msg = self.__kernel.session.deserialize(
                    msg, content=False, copy=False)
            else:
                # Do not spend time analyzing already-rejected messages.
                msg = None
            if msg is None or 'comm_' not in msg['header']['msg_type']:
                # The message is not related to comm, so put it back in
                # the queue after lowering its priority so that it is sent
                # at the "end of the queue", i.e. just at the right place:
                # after the next unchecked messages, after the other
                # messages already put back in the queue, but before the
                # next one to go the same way. Note that every shell
                # message has SHELL_PRIORITY by default.
                self.__kernel.msg_queue.put_nowait(
                    (SHELL_PRIORITY + 1, t, dispatch, args))
            else:
                # Comm message. Process it right now.
                comm_handler = getattr(
                    self.__kernel.comm_manager, msg['header']['msg_type'])
                msg['content'] = self.__kernel.session.unpack(msg['content'])
                comm_handler(None, None, msg)
        self.qsize_old = self.__kernel.msg_queue.qsize()

process_kernel_comm = CommProcessor()
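For context, a minimal usage sketch (assuming the same zmq_socket and polling intent as in the loops above): wait for the ZMQ reply while servicing only comm messages in between, so queued cell evaluations stay in order:

while True:
    try:
        rep = zmq_socket.recv(flags=zmq.NOBLOCK).decode("utf-8")
        break  # reply received
    except zmq.error.ZMQError:
        # service pending comm messages only, leaving the rest queued
        process_kernel_comm()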

Getting Data from PolarH10 via BLE

I have been trying to get data from my PolarH10 with my Raspberry Pi. I have been successfully getting data through the command line with bluez, but have been unable to reproduce that in Python. I am using pygatt (gatttool bindings) and Python 3.
I have been closely following the examples provided on Bitbucket and was able to detect my device and filter out its MAC address by name. However, I was unable to get either of the "reading data asynchronously" examples to work.
import time
import gattlib

# mymac holds the MAC address found earlier by scanning

# This doesn't work...
req = gattlib.GATTRequester(mymac)
response = gattlib.GATTResponse()
req.read_by_handle_async(0x15, response)  # what does the 0x15 mean?
while not response.received():
    time.sleep(0.1)
steps = response.received()[0]
...

# This doesn't work either
class NotifyYourName(gattlib.GATTResponse):
    def on_response(self, data):
        print("your data is: {}".format(data))

response = NotifyYourName()
req = gattlib.GATTRequester(mymac)
req.read_by_handle_async(0x15, response)
while True:
    # here, do other interesting things
    time.sleep(1)
I don't know, and cannot extract from the documentation(s), how to subscribe to/read notifications from a characteristic (heart rate) of my sensor (PolarH10). The error I get when calling GATTRequester.connect(True) is:
RuntimeError: Channel or attrib not ready.
Please tell me how to correctly connect to a BLE device via Python on Debian, how to programmatically identify offered services and their characteristics, and how to get their notifications asynchronously in Python using gattlib (pygatt) or any other library. Thanks!
The answer is: Just use bleak.
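To flesh that out, a minimal bleak sketch for heart-rate notifications (assumptions: the standard Bluetooth SIG Heart Rate Measurement characteristic UUID, and a placeholder MAC address; discover yours e.g. with bleak's scanner):

import asyncio
from bleak import BleakClient

# Standard Heart Rate Measurement characteristic UUID (Bluetooth SIG).
HR_MEASUREMENT = "00002a37-0000-1000-8000-00805f9b34fb"

def on_hr(sender, data):
    # Byte 0 holds flags; with the 8-bit format (flag bit 0 == 0),
    # the heart rate in bpm is in byte 1.
    print("heart rate:", data[1])

async def main(address):
    async with BleakClient(address) as client:
        await client.start_notify(HR_MEASUREMENT, on_hr)
        await asyncio.sleep(30.0)  # stream notifications for 30 s
        await client.stop_notify(HR_MEASUREMENT)

asyncio.run(main("BE:BA:CA:FE:BA:BE"))  # placeholder address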
I have a device that presents the same behavior. In my case, the problem was that it does not have a channel of type public; I had to use random instead (as in gatttool -b BE:BA:CA:FE:BA:BE -I -t random).
Just calling the connect() method with the parameter channel_type set to random fixed it:
requester.connect(True, channel_type="random")
PS: Sorry for the late response (maybe it will be helpful to others).

receive SMS using python and python-smpp

I'm a newbie to SMPP, but I need to simulate traffic over the SMPP protocol. I found a tutorial on how to send SMS using an SMPP lib from Python: How to Send SMS using SMPP Protocol.
I'm trying to write a receiver, but I am unable to get it to work. Please help.
My code is:
import smpplib.client

class ClientCl():
    client = None

    def receive_SMS(self):
        client = smpplib.client.Client('localhost', 1000)
        try:
            client.connect()
            client.bind_receiver("sysID", "login", "password")
            sms = client.get_message()
            print(sms)
        except:
            print("boom! nothing works")
            pass

sms_getter = ClientCl.receive_SMS
From what I can understand, the smpplib you are using is the one available on GitHub. Looking at your code and the client code, I can't find the function client.get_message. Perhaps you have an older version of the library, or I have the wrong library. In any case, it is likely that the get_message function does not block and wait for a message to arrive.
Looking at the client code, it seems that you have two options:
Poll the library until you get a valid message.
Set up the library to listen on the SMPP port and call a function once a message arrives.
If you look at the README.md file, it shows how you can set up the library to implement the second option (which is the better one).
...
client = smpplib.client.Client('example.com', SOMEPORTNUMBER)

# Print when obtaining message_id
client.set_message_sent_handler(
    lambda pdu: sys.stdout.write('sent {} {}\n'.format(pdu.sequence, pdu.message_id)))
client.set_message_received_handler(
    lambda pdu: sys.stdout.write('delivered {}\n'.format(pdu.receipted_message_id)))

client.connect()
client.bind_transceiver(system_id='login', password='secret')

for part in parts:
    pdu = client.send_message(
        source_addr_ton=smpplib.consts.SMPP_TON_INTL,
        #source_addr_npi=smpplib.consts.SMPP_NPI_ISDN,
        # Make sure it is a byte string, not unicode:
        source_addr='SENDERPHONENUM',
        dest_addr_ton=smpplib.consts.SMPP_TON_INTL,
        #dest_addr_npi=smpplib.consts.SMPP_NPI_ISDN,
        # Make sure these two params are byte strings, not unicode:
        destination_addr='PHONENUMBER',
        short_message=part,
        data_coding=encoding_flag,
        esm_class=msg_type_flag,
        registered_delivery=True,
    )
    print(pdu.sequence)

client.listen()
...
When a message or delivery receipt is received, the function defined in client.set_message_received_handler() will be called; in the example, it is a lambda function. There is also an example of how to set up listening in a thread.
If you prefer the simpler polling option, you should use the poll function. For the simplest implementation, all you need to do is:
while True:
    client.Poll()
As before, the function set in client.set_message_received_handler() will be called once a message arrives.
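Putting the pieces together for a receive-only client, here is a minimal sketch based on the README pattern above (host, port, and credentials are placeholders):

import sys
import smpplib.client

client = smpplib.client.Client('localhost', 1000)
# Called for every incoming deliver_sm PDU (mobile-originated message
# or delivery receipt).
client.set_message_received_handler(
    lambda pdu: sys.stdout.write('received: {}\n'.format(pdu.short_message)))
client.connect()
client.bind_receiver(system_id='login', password='secret')
client.listen()  # blocks, dispatching incoming messages to the handler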

python client recv only receiving on exit inside BGE

Using Python 3, I'm trying to send a file from a server to a client as soon as the client connects to the server. The problem is that the client only continues past recv when I close it (when the connection is closed).
I'm running the client in the Blender Game Engine; the client runs until it gets to recv, then it just stops. When I exit the game engine, I can see that the console received the expected bytes.
From other threads I have read that this might be because the recv never sees an end, which is why I added "\r\n" to the end of the bytearray that the server is sending. But still, the client just stops at recv until I exit the application.
In the code below I'm only sending the first 6 bytes; these tell the client the size of the file. After this I intend to send the data of the file over the same connection.
What am I doing wrong here?
client:
import socket
import threading

def TcpConnection():
    TCPsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    TCPsocket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    server_address = ('localhost', 1338)
    TCPsocket.connect(server_address)
    print("TCP Socket open!, starting thread!")
    ServerResponse = threading.Thread(target=TcpReciveMessageThread, args=(TCPsocket,))
    ServerResponse.daemon = True
    ServerResponse.start()

def TcpReciveMessageThread(Sock):
    print("Tcp thread running!")
    size = Sock.recv(6)  # Sock.MSG_WAITALL
    print("Recived data", size)
    Sock.close()
Server:
import threading
import socket
import os

def StartTcpSocket():
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.bind(('localhost', 1338))
    server_socket.listen(10)
    while 1:
        connection, client_address = server_socket.accept()
        Response = threading.Thread(target=StartTcpClientThread, args=(connection,))
        Response.daemon = True  # thread dies when main thread (only non-daemon thread) exits.
        Response.start()

def StartTcpClientThread(socket):
    print("Sending data")
    length = 42
    l1 = ToByts(length)
    socket.send(l1)
    # loop that sends the file goes here
    print("Data sent")
    #socket.close()

def ToByts(Size):
    byt_res = (Size).to_bytes(4, byteorder='big')
    result = bytearray()
    for r in byt_res:
        result.append(r)
    t = bytearray("\r\n", "utf-8")
    for b in t:
        result.append(b)
    return result

MessageListener = threading.Thread(target=StartTcpSocket)
MessageListener.daemon = True  # thread dies when main thread (only non-daemon thread) exits.
MessageListener.start()
while 1:
    pass
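For reference, a hypothetical client-side counterpart to ToByts (the name FromByts is made up): the first 4 bytes carry the big-endian size and the trailing b"\r\n" is only framing, so decoding the size the client just received, e.g. size = FromByts(Sock.recv(6)), could look like:

def FromByts(data):
    # data is the 6 bytes produced by ToByts: 4-byte big-endian size + b"\r\n"
    return int.from_bytes(data[:4], byteorder='big')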
If the problem is that the client doesn't find an end of the stream, then how can I solve this without closing the connection, as I intend to send the file over the same connection?
Update #1:
To clarify: the print in the client that says "Recived" is printed only when I exit the game engine (when the client is closing). The loops that send and receive the file were left out of the question as they are not the problem; the problem still occurs without them. The client freezes at recv until it is closed.
Update #2:
Here is an image of what my consoles print when I run the server and client:
As you can see, it never prints the "Recived" line.
When I exit the Blender Game Engine, I get this output:
Now, when the engine and the server script have exited/closed/finished, I get the data printed. So recv is apparently pausing the thread until the socket is closed. Why is it doing this, and how can I get my data (and the print) before the socket closes? This also happens if I set
ServerResponse.daemon = False
Here is a .blend (on MediaFire) of the client; the server runs on Python 3 (PyPy). I'm using Blender 2.78a.
Update #3:
I tested and verified that the problem is the same on Windows 10 and Linux Mint. I also made a video showing the problem:
In the video you can see that I only receive data from the server when I exit the Blender GE. After some research, I am beginning to suspect that the problem is related to Python threading not playing well with the BGE.
https://www.youtube.com/watch?v=T5l9YGIoDYA
I have observed a similar phenomenon. It appears that the Python instance doesn't receive any execution cycles from Blender Game Engine (BGE) unless a controller gets invoked.
A simple solution is:
Add another Always sensor that is fired on every logic tick.
Add another Python controller that does nothing, a no-op.
Hook the sensor to the controller.
I applied this to your .blend as shown in the following screen capture.
I tested it by running your server and it seems to work OK.
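For illustration, the no-op controller can be as small as this (a sketch; the module and function names are made up, with the controller set to 'Module' mode pointing at noop.tick):

# noop.py
# Wired to an Always sensor with true-level triggering (pulse mode on),
# this gives the embedded Python instance regular execution cycles so
# background threads (like the TCP receive thread) get scheduled.
def tick(controller):
    pass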
Cheers, Jim
