Message properties seem to get lost after routing through Azure IoT edgeHub

I'm not sure if this is a bug or if I am missing something. I also created an issue on GitHub a few days ago, but with no response so far.
Here is my scenario:
I'm running a Raspberry Pi as a transparent IoT Edge gateway with two custom modules in addition to the edgeAgent and edgeHub. The edgeHub is configured to route the messages coming from the leaf device to one of the custom modules with the route below.
FROM /messages/* WHERE NOT IS_DEFINED($connectionModuleId) INTO BrokeredEndpoint("/modules/camera-capture/inputs/input1")
In the module I added a function which listens for incoming messages on input1 and I can see the messages and print the message body. In the leaf device application I'm sending messages via MQTT with application properties (see code snippet 1). When I change the route to...
FROM /messages/* WHERE (CameraState = 'true') INTO BrokeredEndpoint("/modules/camera-capture/inputs/input1")
...only half of the messages are routed to the module, which indicates that the edgeHub finds the property and interprets it correctly. However, when I try to extract the properties of the message in the CameraCapture module (see code snippet 2), they seem to be empty (see console output).
So it seems like the message properties are getting lost after routing through the edgeHub. The result is the same using AMQP.
This is how I send the message (snippet 1):
client = IoTHubClient(CONNECTION_STRING, PROTOCOL)
set_certificates(client)
message = IoTHubMessage("test message")
# send a message every two seconds
while True:
    # add custom application properties
    prop_map = message.properties()
    if run_camera:
        prop_map.add_or_update("CameraState", "true")
    else:
        prop_map.add_or_update("CameraState", "false")
    client.send_event_async(message, send_confirmation_callback, None)
    print("Message transmitted to IoT Edge")
    time.sleep(2)
This is the receiver (snippet 2):
def receive_message_callback(message, hubManager):
    global RECEIVE_CALLBACKS
    message_buffer = message.get_bytearray()
    size = len(message_buffer)
    print("Message received: %s" % message_buffer[:size].decode('utf-8'))
    map_properties = message.properties()
    key_value_pair = map_properties.get_internals()
    print("Key value pair: %s" % key_value_pair)
    return IoTHubMessageDispositionResult.ACCEPTED
EDIT: Added console logs:
Message received: test message
Key value pair: {}
Waiting...
Waiting...
Message received: test message
Key value pair: {}

The issue is known and tracked on GitHub: https://github.com/Azure/azure-iot-sdk-python/issues/244
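Until that issue is resolved, one possible workaround (my own suggestion, not an official fix) is to duplicate the flag in a JSON message body: the application property is still set for edgeHub routing, while the module reads the value from the body, which demonstrably arrives intact (see the console output above). A minimal sketch, assuming the same iothub_client API as in the snippets:
import json

# Sender: keep the application property for routing, but also embed
# the flag in the body, which survives the trip through edgeHub.
message = IoTHubMessage(json.dumps({"text": "test message", "CameraState": "true"}))
message.properties().add_or_update("CameraState", "true")

# Receiver: recover the flag from the body instead of the properties.
body = json.loads(message.get_bytearray().decode('utf-8'))
camera_state = body.get("CameraState")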

Related

Appropriate way to run pytest unit tests for your API using threading.Thread and virtual ports with socat

So I have written an API for a device. The unit tests are going to run on CI automatically, therefore I will not test the connection with the device; the purpose of these unit tests is just to verify that my API generates appropriate requests and reacts appropriately to responses.
Before I had the following:
import threading

import pytest
import serial

from src.device import Device  # that is my API

class TestDevice:
    @pytest.fixture(scope='class')
    def device(self):
        dev = Device()
        dev.connect(port='/dev/ttyUSB0')
dev.connect() repeatedly sends a command through the serial port to establish a handshake; it stays inside the function until a response is received or a timeout happens.
So in order to simulate the device, I have opened a pair of virtual serial ports using socat:
socat -d -d pty,raw,echo=0 pty,raw,echo=0
My idea is to write into one virtual port and read from another. For that I would launch another thread that reads the messages that have been sent, and upon the thread receiving the handshake request, I would send a reply like this:
import time

class TestDevice:
    @pytest.fixture(scope='class')
    def device(self):
        reader_thread = threading.Thread(target=self.reader)
        reader_thread.start()
        dev = Device()
        dev.connect('/dev/pts/3')

    def reader(self):
        EXPECTED_HANDSHAKE = b"hello"
        HANDSHAKE_REPLY = b"hi"
        timeout_handshake_ms = 1000
        reader_port = serial.Serial(port='/dev/pts/4', baudrate=115200)
        start_time_ns = time.time_ns()
        timeout_time_ns = start_time_ns + (timeout_handshake_ms * 1e6)
        while time.time_ns() < timeout_time_ns:
            response = reader_port.read(1024)
            # if dev.connect() sent an appropriate handshake request,
            # this port receives it and replies
            if response == EXPECTED_HANDSHAKE:
                reader_port.write(HANDSHAKE_REPLY)
And once the reply is received, dev.connect() exits successfully and the connection is considered established. All of the code that I have posted works. As you can see, my approach is to first start reading in a different thread, then send a command; in the reader thread I read the request and send an appropriate response if applicable. The connection part was the easy one. However, I have 30 commands to test, all of them with different inputs, multiple arguments, etc. The reader's response also varies depending on the request generated by the API. Therefore, I will need to send the same command with different arguments, and I will need to reply to a command in many different ways. What is the best way to organize my code so that I can test everything as efficiently as possible? Do I need a thread for every command I am testing? Is there an efficient way to do all of this I have set out to do?
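One way to organize this without a dedicated thread per command (a rough sketch based on the code above, untested; the reply table and names are hypothetical) is to run a single responder thread driven by a request-to-reply mapping, and let each test parametrize the commands and canned replies it needs:
import threading

import pytest
import serial

# Hypothetical request -> canned reply table; one entry per tested command.
REPLY_MAP = {
    b"hello": b"hi",
    b"read_config": b"config_data",
}

class FakeDevice:
    """Single reader thread answering all requests from the reply table."""
    def __init__(self, port='/dev/pts/4'):
        self.port = serial.Serial(port=port, baudrate=115200, timeout=0.1)
        self.running = True
        self.thread = threading.Thread(target=self._serve, daemon=True)
        self.thread.start()

    def _serve(self):
        while self.running:
            request = self.port.read(1024)
            if request in REPLY_MAP:
                self.port.write(REPLY_MAP[request])

    def stop(self):
        self.running = False
        self.thread.join()

@pytest.fixture(scope='class')
def fake_device():
    fake = FakeDevice()
    yield fake
    fake.stop()

Individual tests can then extend REPLY_MAP (or pass their own mapping into FakeDevice) and use pytest.mark.parametrize for the argument variations, so the reader logic is written only once.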

What is the "normal" way to feed back game actions to a central server (MMO)

Basically I need to feed "events" back to the central server using GDScript, i.e. user picked up this, user dropped this, etc. I'm assuming the mobile phone holds an "event queue" that needs to be shipped off to the server. HTTPS is fine for my purposes. (A technique that would apply to any application that needs to share activity events between applications.)
How does one implement a queue/thread in GDScript to handle this activity?
I'm inclined to drop events into an SQLite database, then have some kind of "thread" that picks up and retries sending the events. Is this something that is normally coded from scratch? How do you do threads? If there are no threads, how do you handle a failed HTTP request, and how do you ensure that something retries the message?
At this point in time, there does not appear to be a standardized/built-in event queue framework.
A simple class/node with an array acting as a queue works well, with a simple function to enqueue messages. The snippet below demonstrates submitting an HTTP request, where a callback is made to a function called http_result when the request completes or fails.
http_request = HTTPRequest.new()
add_child(http_request)
http_request.connect("request_completed", self, "http_result")
The HTTP result handler (its name must match the method name passed to connect, i.e. http_result):
func http_result(result, response_code, headers, body):
    if response_code == 200:
        var data = parse_json(body.get_string_from_utf8())
        print("json: ", data)
        print("headers: ", headers)
    else:
        print("http response: ", response_code, " CODE: ", result, " data: ", body.get_string_from_utf8())
    remove_child(http_request)
    http_request = null

How to publish without a subscriber

After some testing with both pub/sub and xadd/xread, I have come to realize that if my subscriber is not running, the message will not be received when I later start up the subscriber. Example situation:
You send a message via publish
You turn on your subscriber and listen on the channel 10 seconds after you have sent the message via publish
The message will be lost.
Here are the two variants that I have tried:
Sub.py
import redis
import time
from config import configuration

client = redis.Redis(
    host=configuration.helheim.redis_host,
    port=configuration.helheim.redis_port,
    db=configuration.helheim.redis_database
)

while True:
    test = client.xread({"sns": '$'}, None, 0)
    print(test)
    time.sleep(1)
Pub.py
import redis
from config import configuration

client = redis.Redis(
    host=configuration.helheim.redis_host,
    port=configuration.helheim.redis_port,
    db=configuration.helheim.redis_database
)

test = client.xadd("sns", {"status": "kill", "link": "https://www.sneakersnstuff.com/sv/product/49769/salomon-xa-alpine-mid-advanced"})
print(test)
Sub.py
EVENT_LISTENER.subscribe("sns")
while True:
    message = EVENT_LISTENER.get_message()
    if message and not message['data'] == 1:
        message = json.loads(message['data'])
Pub.py
import redis
from config import configuration

client = redis.Redis(
    host=configuration.helheim.redis_host,
    port=configuration.helheim.redis_port,
    db=configuration.helheim.redis_database
)

channel = "sns"
client.publish(channel,
               '{"status": "kill", "store": "sns", "link": "https://www.sneakersnstuff.com/sv/product/49769/salomon-xa-alpine-mid-advanced"}')
and it seems like no historical messages are persisted in Redis.
My question is: how can I read the messages that I have published (and remove them after a read) once I have turned on my subscriber?
Pub/sub never persists messages. See What are the main differences between Redis Pub/Sub and Redis Stream?
Streams do persist messages; see https://redis.io/commands/xread
The problem is that you are using xread with the special $ ID, which only returns messages added after the call:
When blocking sometimes we want to receive just entries that are added to the stream via XADD starting from the moment we block. In such a case we are not interested in the history of already added entries. For this use case, we would have to check the stream top element ID, and use such ID in the XREAD command line. This is not clean and requires to call other commands, so instead it is possible to use the special $ ID to signal the stream that we want only the new things.
You may want to try with 0 on your first call, then use the ID of the last message you received.
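For example, a minimal sketch with redis-py (untested), which also deletes each entry once it has been handled, matching the "remove after a read" requirement:
last_id = "0"  # start from the very beginning of the stream
while True:
    # block=0 waits forever; pass a millisecond value to time out instead
    for stream_name, messages in client.xread({"sns": last_id}, block=0):
        for message_id, fields in messages:
            print(message_id, fields)
            last_id = message_id            # resume after this entry next time
            client.xdel("sns", message_id)  # drop the entry once processed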
If you want to avoid starting from zero in case of failure, and you cannot persist the last message ID in your client, learn about consumer groups: https://redis.io/topics/streams-intro#consumer-groups
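A minimal consumer-group sketch (the group and consumer names are made up; untested):
import redis

# Create the group once, reading the stream from the beginning;
# mkstream=True also creates the stream if it does not exist yet.
try:
    client.xgroup_create("sns", "workers", id="0", mkstream=True)
except redis.exceptions.ResponseError:
    pass  # the group already exists

while True:
    # ">" asks for entries never delivered to this group before
    for stream_name, messages in client.xreadgroup("workers", "consumer-1", {"sns": ">"}, block=0):
        for message_id, fields in messages:
            print(message_id, fields)
            client.xack("sns", "workers", message_id)  # mark as processed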

Direct communication between Javascript in Jupyter and server via IPython kernel

I'm trying to display an interactive mesh visualizer based on Three.js inside a Jupyter cell. The workflow is the following:
The user launches a Jupyter notebook and opens the viewer in a cell
Using Python commands, the user can manually add meshes and animate them interactively
In practice, the main thread sends requests to a server via ZMQ sockets (every request needs a single reply), then the server sends back the desired data to the main thread using other socket pairs (many "requests", very few replies expected), which finally uses communication through the IPython kernel to send the data to the Javascript frontend. So far so good, and it works properly because the messages all flow in the same direction:
Main thread (Python command) [ZMQ REQ] -> [ZMQ REP] Server (Data) [ZMQ XREQ] -> [ZMQ XREQ] Main thread (Data) [IPykernel Comm] -> [Ipykernel Comm] Javascript (Display)
However, the pattern is different when I want to fetch the status of the frontend to wait for the meshes to finish loading:
Main thread (Status request) --> Server (Status request) --> Javascript (Processing)
Main thread (Reply) <-- Server (Reply) <-- Main thread (Forward reply) <-- Javascript (Reply)
This time, the server sends a request to the frontend, which in return does not send the reply directly back to the server, but to the main thread, which then forwards the reply to the server, which finally answers the main thread.
There is a clear issue: the main thread is supposed to simultaneously forward the reply of the frontend and receive the reply from the server, which is impossible. The ideal solution would be to enable the server to communicate directly with the frontend, but I don't know how to do that, since I cannot use get_ipython().kernel.comm_manager.register_target on the server side. I tried to instantiate an IPython kernel client on the server side using jupyter_client.BlockingKernelClient, but I didn't manage to use it to communicate nor to register targets.
OK, so I found a solution for now, but it is not great. Instead of just waiting for a reply and keeping the main loop busy, I added a timeout and interleaved the wait with do_one_iteration of the kernel to force it to handle messages:
while True:
    try:
        rep = zmq_socket.recv(flags=zmq.NOBLOCK).decode("utf-8")
        break  # reply received
    except zmq.error.ZMQError:
        kernel.do_one_iteration()
It works, but unfortunately it is not really portable and it messes up the Jupyter evaluation stack (all queued evaluations get processed here instead of in order)...
Alternatively, there is another way that is more appealing:
import zmq
import asyncio
import nest_asyncio

nest_asyncio.apply()

zmq_socket.send(b"ready")

async def enforce_receive():
    await kernel.process_one(True)
    return zmq_socket.recv().decode("utf-8")

loop = asyncio.get_event_loop()
rep = loop.run_until_complete(enforce_receive())
but in this case you need to know in advance that you expect the kernel to receive exactly one message, and relying on nest_asyncio is not ideal either.
Here is a link to an issue on this topic on GitHub, along with an example notebook.
UPDATE
I finally managed to solve my issue completely, without shortcomings. The trick is to analyze every incoming message. Irrelevant messages are put back in the queue in order, while the comm-related ones are processed on the spot:
import zmq
from IPython import get_ipython
# SHELL_PRIORITY comes from ipykernel's priority message queue (ipykernel 5.x)
from ipykernel.kernelbase import SHELL_PRIORITY

class CommProcessor:
    """
    @brief   Re-implementation of ipykernel.kernelbase.do_one_iteration
             to only handle comm messages on the spot, and put the
             other ones back in the queue.
    @details Calling 'do_one_iteration' messes up the kernel
             'msg_queue'. Some messages will be processed too soon,
             which is likely to corrupt the kernel state. This method
             only processes comm messages to avoid such side effects.
    """

    def __init__(self):
        self.__kernel = get_ipython().kernel
        self.qsize_old = 0

    def __call__(self, unsafe=False):
        """
        @brief     Check once whether there is a pending comm-related
                   event in the shell stream message priority queue.
        @param[in] unsafe  Whether to assume that checking if the number
                           of pending messages has changed is enough. It
                           makes the evaluation much faster but flawed.
        """
        # Flush every incoming message on shell_stream only.
        # Note that this is a faster implementation of ZMQStream.flush
        # that only handles incoming messages. It reduces the computation
        # time from about 10us to 20ns.
        # https://github.com/zeromq/pyzmq/blob/e424f83ceb0856204c96b1abac93a1cfe205df4a/zmq/eventloop/zmqstream.py#L313
        shell_stream = self.__kernel.shell_streams[0]
        shell_stream.poller.register(shell_stream.socket, zmq.POLLIN)
        events = shell_stream.poller.poll(0)
        while events:
            _, event = events[0]
            if event:
                shell_stream._handle_recv()
                shell_stream.poller.register(
                    shell_stream.socket, zmq.POLLIN)
            events = shell_stream.poller.poll(0)

        qsize = self.__kernel.msg_queue.qsize()
        if unsafe and qsize == self.qsize_old:
            # The number of messages in the queue has not changed since
            # the last check. Assume those messages are the same as
            # before and return early.
            return

        # One must go through all the messages to keep them in order.
        for _ in range(qsize):
            priority, t, dispatch, args = \
                self.__kernel.msg_queue.get_nowait()
            if priority <= SHELL_PRIORITY:
                _, msg = self.__kernel.session.feed_identities(
                    args[-1], copy=False)
                msg = self.__kernel.session.deserialize(
                    msg, content=False, copy=False)
            else:
                # Do not spend time analyzing already rejected messages
                msg = None
            if msg is None or 'comm_' not in msg['header']['msg_type']:
                # The message is not related to comm, so put it back in
                # the queue after lowering its priority so that it is
                # handled at the "end of the queue", i.e. just at the
                # right place: after the next unchecked messages, after
                # the other messages already put back in the queue, but
                # before the next one to go the same way. Note that every
                # shell message has SHELL_PRIORITY by default.
                self.__kernel.msg_queue.put_nowait(
                    (SHELL_PRIORITY + 1, t, dispatch, args))
            else:
                # Comm message. Process it right now.
                comm_handler = getattr(
                    self.__kernel.comm_manager, msg['header']['msg_type'])
                msg['content'] = self.__kernel.session.unpack(msg['content'])
                comm_handler(None, None, msg)
        self.qsize_old = self.__kernel.msg_queue.qsize()

process_kernel_comm = CommProcessor()
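With that in place, the busy loop from the first workaround can be rewritten to process only comm messages while waiting for the reply (same zmq_socket and kernel assumptions as before):
while True:
    try:
        rep = zmq_socket.recv(flags=zmq.NOBLOCK).decode("utf-8")
        break  # reply received
    except zmq.error.ZMQError:
        process_kernel_comm()  # handle pending comm messages only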

View MPU6050 data on Azure IoT Edge

Objective:
View MPU6050 data on Azure IoT Edge
I would like to deploy a module to my IoT Edge device. In order to deploy the MPU6050 sensor as a module, I am stuck on the following doubts. It would be really helpful if someone could share their insights on this, as I am a newbie to Azure.
Current position:
An Edge instance has been created on the Azure portal, and only the "set modules" part is remaining. I have configured my Raspberry Pi to function as an Edge device and can view it listed in Azure IoT Edge. A new registry has been created on the Azure portal; only pushing my MPU6050-reading image onto the registry is remaining.
Doubts:
I have downloaded the Python SDK to customise it to read MPU6050 data, but I cannot understand how the whole thing works. If there is any tutorial on writing your own code to read sensor data and building it into a module, that would be very helpful. (I am unable to find any online.)
I am aware of how to run a Python file in Docker. But how can this whole SDK be deployed onto the Azure registry so that I can just give a single link in the module deployment of the Edge device?
I am doubtful whether I am on the right track with the entire process. Correct me if I am wrong:
The iot-hub-sdk is configured to read MPU6050 data --> it is built and run on Docker --> the local Docker image is pushed into the Azure registry that I have already created --> that registry link is copied and pasted into the Edge device deployment --> that Edge instance is linked to my physical Edge device --> so when the Edge function is run, I can see the entire sensor data on a locally connected Edge device that does not have an internet connection.
Any help or suggestions regarding the queries mentioned above would be really appreciated.
Thanks & cheers!
There is a tutorial on how to create Python-based modules for IoT Edge: https://learn.microsoft.com/en-us/azure/iot-edge/tutorial-python-module
As the tutorial suggests, I recommend using Visual Studio Code with the IoT Edge extension. That gives you the Python module template, the Dockerfile, etc. You can push your custom module directly from VS Code into your private container registry, e.g. Azure Container Registry, and also set your deployment manifest (which module(s) to run on which Edge device).
As requested in the comments, I built a quick complete sample (did not test it, though). It is just based on the template you get when you create a new Python module using the VS Code IoT Edge extension:
import random
import time
import sys
import iothub_client
# pylint: disable=E0611
from iothub_client import IoTHubModuleClient, IoTHubClientError, IoTHubTransportProvider
from iothub_client import IoTHubMessage, IoTHubMessageDispositionResult, IoTHubError

# messageTimeout - the maximum time in milliseconds until a message times out.
# The timeout period starts at IoTHubModuleClient.send_event_async.
# By default, messages do not expire.
MESSAGE_TIMEOUT = 10000

# global counters
RECEIVE_CALLBACKS = 0
SEND_CALLBACKS = 0

# baseline values and format string for the simulated telemetry
TEMPERATURE = 20.0
HUMIDITY = 60
MSG_TXT = "{\"temperature\": %.2f,\"humidity\": %.2f}"

# Choose HTTP, AMQP or MQTT as transport protocol. Currently only MQTT is supported.
PROTOCOL = IoTHubTransportProvider.MQTT

# Callback received when the message that we're forwarding is processed.
def send_confirmation_callback(message, result, user_context):
    global SEND_CALLBACKS
    print("Confirmation[%d] received for message with result = %s" % (user_context, result))
    map_properties = message.properties()
    key_value_pair = map_properties.get_internals()
    print("    Properties: %s" % key_value_pair)
    SEND_CALLBACKS += 1
    print("    Total calls confirmed: %d" % SEND_CALLBACKS)

class HubManager(object):
    def __init__(self, protocol=IoTHubTransportProvider.MQTT):
        self.client_protocol = protocol
        self.client = IoTHubModuleClient()
        self.client.create_from_environment(protocol)
        # set the time until a message times out
        self.client.set_option("messageTimeout", MESSAGE_TIMEOUT)

    # Forwards the message received onto the next stage in the process.
    def forward_event_to_output(self, outputQueueName, event, send_context):
        self.client.send_event_async(
            outputQueueName, event, send_confirmation_callback, send_context)

def main(protocol):
    try:
        print("\nPython %s\n" % sys.version)
        print("IoT Hub Client for Python")
        hub_manager = HubManager(protocol)
        print("Starting the IoT Hub Python sample using protocol %s..." % hub_manager.client_protocol)
        print("The sample is now sending messages and will run indefinitely. Press Ctrl-C to exit.")
        while True:
            # Build the message with simulated telemetry values.
            # Put your real sensor reading logic here instead.
            temperature = TEMPERATURE + (random.random() * 15)
            humidity = HUMIDITY + (random.random() * 20)
            msg_txt_formatted = MSG_TXT % (temperature, humidity)
            message = IoTHubMessage(msg_txt_formatted)
            hub_manager.forward_event_to_output("output1", message, 0)
            time.sleep(10)
    except IoTHubError as iothub_error:
        print("Unexpected error %s from IoTHub" % iothub_error)
        return
    except KeyboardInterrupt:
        print("IoTHubModuleClient sample stopped")

if __name__ == '__main__':
    main(PROTOCOL)
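To replace the simulated telemetry with real MPU6050 readings, the loop body would do an I2C read instead. A rough sketch (untested, assuming the smbus package and the sensor at its default address 0x68):
import json
import smbus

MPU6050_ADDR = 0x68   # default I2C address of the MPU6050
PWR_MGMT_1 = 0x6B     # power management register
ACCEL_XOUT_H = 0x3B   # first accelerometer output register

bus = smbus.SMBus(1)  # I2C bus 1 on recent Raspberry Pi models
bus.write_byte_data(MPU6050_ADDR, PWR_MGMT_1, 0)  # wake the sensor up

def read_word(reg):
    # Each axis is a signed 16-bit value split over two registers
    high = bus.read_byte_data(MPU6050_ADDR, reg)
    low = bus.read_byte_data(MPU6050_ADDR, reg + 1)
    value = (high << 8) | low
    return value - 65536 if value >= 0x8000 else value

def read_mpu6050():
    # Raw counts divided by 16384.0 give g at the default +/-2g range
    return {
        "accel_x": read_word(ACCEL_XOUT_H) / 16384.0,
        "accel_y": read_word(ACCEL_XOUT_H + 2) / 16384.0,
        "accel_z": read_word(ACCEL_XOUT_H + 4) / 16384.0,
    }

Inside the while loop of the sample, message = IoTHubMessage(json.dumps(read_mpu6050())) would then replace the simulated values. Note that the module's container needs access to the I2C device (e.g. /dev/i2c-1) in its container create options.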
