View MPU6050 data on Azure IoT Edge

Objective:
View MPU6050 data on Azure IoT Edge
I would like to deploy a module to my IoT Edge device. In order to deploy the MPU6050 sensor as a module, I am stuck on the following doubts. It would be really helpful if someone could share their insights, as I am a newbie to Azure.
Current position:
An Edge instance has been created on the Azure portal and only the "Set Modules" part remains. I have configured my Raspberry Pi to function as an edge device and can view the listings present in Azure IoT Edge. A new registry has been created on the Azure portal; only pushing my MPU6050-reading image onto the registry remains.
Doubts:
I have downloaded the Python SDK to customise it to read MPU6050 data, but I cannot understand how the whole thing works. A tutorial on writing our own code to read any sensor data and build it would be very helpful. (I am unable to find any online.)
I am aware of how to run a Python file on Docker, but how can this whole SDK be deployed onto the Azure registry so that I can just give a single link in the module deployment of the edge device?
I am not sure whether I am on the right track with the entire process. Correct me if I am wrong:
The iot-hub-sdk is configured to read MPU6050 data --> it is built and run on Docker --> the local Docker image is pushed into the Azure registry that I have already created --> that registry link is copied and pasted into the edge device deployment --> that Edge instance is linked to my physical edge device --> so when the Edge function runs, I can see the entire sensor data on a locally connected edge device that does not have an internet connection.
Any help or suggestions regarding any of the queries mentioned above would be really appreciated.
Thanks & Cheers!

There is a tutorial on how to create Python-based modules for IoT Edge: https://learn.microsoft.com/en-us/azure/iot-edge/tutorial-python-module
As the tutorial suggests, I recommend using Visual Studio Code with the IoT Edge extension. Then you get the Python module template, the Dockerfile, etc. You can push your custom module directly from VS Code into your private container registry, e.g. Azure Container Registry, and also set your deployment manifest (which module(s) to run on which Edge device).
As requested in the comments, I built a quick, complete sample (did not test it, though). The sample is just based on the template you get when you create a new Python module using the VS Code IoT Edge extension:
import random
import time
import sys
import iothub_client
# pylint: disable=E0611
from iothub_client import IoTHubModuleClient, IoTHubClientError, IoTHubTransportProvider
from iothub_client import IoTHubMessage, IoTHubMessageDispositionResult, IoTHubError

# messageTimeout - the maximum time in milliseconds until a message times out.
# The timeout period starts at IoTHubModuleClient.send_event_async.
# By default, messages do not expire.
MESSAGE_TIMEOUT = 10000

# global counters
RECEIVE_CALLBACKS = 0
SEND_CALLBACKS = 0

# Base values and format string for the simulated telemetry below
# (these were missing from the original snippet).
TEMPERATURE = 20.0
HUMIDITY = 60.0
MSG_TXT = "{\"temperature\": %.2f, \"humidity\": %.2f}"

# Choose HTTP, AMQP or MQTT as transport protocol. Currently only MQTT is supported.
PROTOCOL = IoTHubTransportProvider.MQTT

# Callback received when the message that we're forwarding is processed.
def send_confirmation_callback(message, result, user_context):
    global SEND_CALLBACKS
    print("Confirmation[%d] received for message with result = %s" % (user_context, result))
    map_properties = message.properties()
    key_value_pair = map_properties.get_internals()
    print("    Properties: %s" % key_value_pair)
    SEND_CALLBACKS += 1
    print("    Total calls confirmed: %d" % SEND_CALLBACKS)

class HubManager(object):
    def __init__(self, protocol=IoTHubTransportProvider.MQTT):
        self.client_protocol = protocol
        self.client = IoTHubModuleClient()
        self.client.create_from_environment(protocol)
        # set the time until a message times out
        self.client.set_option("messageTimeout", MESSAGE_TIMEOUT)

    # Forwards the message received onto the next stage in the process.
    def forward_event_to_output(self, outputQueueName, event, send_context):
        self.client.send_event_async(
            outputQueueName, event, send_confirmation_callback, send_context)

def main(protocol):
    try:
        print("\nPython %s\n" % sys.version)
        print("IoT Hub Client for Python")
        hub_manager = HubManager(protocol)
        print("Starting the IoT Hub Python sample using protocol %s..." % hub_manager.client_protocol)
        print("The sample is now sending messages and will run indefinitely. Press Ctrl-C to exit.")
        while True:
            # Build the message with simulated telemetry values.
            # Put your real sensor reading logic here instead.
            temperature = TEMPERATURE + (random.random() * 15)
            humidity = HUMIDITY + (random.random() * 20)
            msg_txt_formatted = MSG_TXT % (temperature, humidity)
            message = IoTHubMessage(msg_txt_formatted)
            hub_manager.forward_event_to_output("output1", message, 0)
            time.sleep(10)
    except IoTHubError as iothub_error:
        print("Unexpected error %s from IoTHub" % iothub_error)
        return
    except KeyboardInterrupt:
        print("IoTHubModuleClient sample stopped")

if __name__ == '__main__':
    main(PROTOCOL)
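
To swap the simulated telemetry for real MPU6050 readings inside that while loop, a minimal sketch could look like the following. It assumes the smbus2 package, the sensor wired to I2C bus 1 at the default address 0x68, and the default ±2g / ±250°/s full-scale ranges; none of this is from the original sample and it is untested:

# Hypothetical MPU6050 helper (not part of the template sample).
# Assumes: pip install smbus2, sensor on I2C bus 1, default address 0x68.
from smbus2 import SMBus

MPU6050_ADDR = 0x68   # default I2C address (AD0 pin low)
PWR_MGMT_1 = 0x6B     # power management register
ACCEL_XOUT_H = 0x3B   # first of the accelerometer data registers
GYRO_XOUT_H = 0x43    # first of the gyroscope data registers

def read_word(bus, reg):
    """Read a signed 16-bit big-endian value from two consecutive registers."""
    high = bus.read_byte_data(MPU6050_ADDR, reg)
    low = bus.read_byte_data(MPU6050_ADDR, reg + 1)
    value = (high << 8) | low
    return value - 65536 if value > 32767 else value

def read_mpu6050():
    """Return (accel_x, accel_y, accel_z, gyro_x, gyro_y, gyro_z)."""
    with SMBus(1) as bus:
        bus.write_byte_data(MPU6050_ADDR, PWR_MGMT_1, 0)  # wake the sensor up
        accel = [read_word(bus, ACCEL_XOUT_H + 2 * i) / 16384.0 for i in range(3)]  # g at +/-2g
        gyro = [read_word(bus, GYRO_XOUT_H + 2 * i) / 131.0 for i in range(3)]      # deg/s at +/-250
        return (*accel, *gyro)

read_mpu6050() could then replace the temperature/humidity simulation when building the message body. Note that the module's container also needs access to the I2C device (e.g. /dev/i2c-1), which is typically granted via the container create options in the deployment manifest.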

Related

Read and notification issues with gattlib BLE?

I am writing a Linux application using the gattlib library in Python 3 to send and receive user-inputted data between a BlueSnap DB9 BLE adapter and my Linux device. I have been able to successfully send a string of data to the adapter from my device and have seen the output on the adapter's terminal, but I am having issues receiving data from the adapter.
I am following this example for reading and writing data using the gattlib library. I can write data using the write_cmd and write_by_handle functions, but I am unable to read data or enable notifications with gattlib using any of the read functions mentioned there. Notifications don't appear to be enabled, because the on_notification function I overrode never reaches the print statement I added there.
I have determined that the handles for writing and reading data are 0x0043 and 0x0046, respectively. Here are the UUIDs for writing and reading that serialio provided to me: UUIDs.
When using bluetoothctl, after selecting the characteristic, I am able to write data to the adapter. Only after enabling notifications in bluetoothctl am I able to read data as well. Once I disable notifications, attempting to read manually prints out all 0s instead of the data I want to read. What is the proper way to select a characteristic and enable notifications using gattlib in Python 3?
UPDATE: I was able to get notifications enabled. I ran hcidump on both bluetoothctl and my Python code and determined that the handle I had used for enabling notifications was incorrect. The correct handle for enabling notifications is 0x0047. Once I realized this mistake, I ran enable_notifications with the correct handle, set both parameters to True, and was able to enable notifications and see incoming data on my device's terminal as I typed it on my adapter's terminal.
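For future readers, here is a rough sketch of what that fix might look like with gattlib, based purely on the description above: the adapter address is a placeholder, the handles are the ones determined in the question, and enable_notifications is called with the handle plus the two boolean parameters mentioned in the update. Untested:

from time import sleep
from gattlib import GATTRequester

class Requester(GATTRequester):
    def on_notification(self, handle, data):
        # fires for incoming data once notifications are enabled
        print("notification on handle 0x%04x: %r" % (handle, data))

req = Requester('00:11:22:33:44:55', False)  # placeholder adapter address
req.connect(True)                            # wait until connected

# write to the adapter using the write handle from the question
req.write_by_handle(0x0043, b'hello\n')

# enable notifications and indications on the descriptor handle
# found via hcidump (0x0047), as described in the update
req.enable_notifications(0x0047, True, True)

# keep the process alive so on_notification can fire
while True:
    sleep(1)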
This does not use gattlib, but here is an example of using the Python 3 D-Bus bindings (pydbus) and the GLib event loop to read, write, and get notifications from GATT characteristics.
from time import sleep
import pydbus
from gi.repository import GLib

# Setup of device specific values
dev_id = 'DE:82:35:E7:43:BE'
adapter_path = '/org/bluez/hci0'
device_path = f"{adapter_path}/dev_{dev_id.replace(':', '_')}"
temp_reading_uuid = 'e95d9250-251d-470a-a062-fa1922dfa9a8'
temp_period_uuid = 'e95d1b25-251d-470a-a062-fa1922dfa9a8'

# Setup DBus information for adapter and remote device
bus = pydbus.SystemBus()
mngr = bus.get('org.bluez', '/')
adapter = bus.get('org.bluez', adapter_path)
device = bus.get('org.bluez', device_path)

# Connect to device (needs to have already been paired via bluetoothctl)
device.Connect()

# Wait for GATT services to be discovered
while not device.ServicesResolved:
    sleep(0.5)

# Some helper functions
def get_characteristic_path(device_path, uuid):
    """Find DBus path for UUID on a device"""
    mng_objs = mngr.GetManagedObjects()
    for path in mng_objs:
        chr_uuid = mng_objs[path].get('org.bluez.GattCharacteristic1', {}).get('UUID')
        if path.startswith(device_path) and chr_uuid == uuid:
            return path

def as_int(value):
    """Create integer from little-endian bytes"""
    return int.from_bytes(value, byteorder='little')

# Get a couple of characteristics on the device we are connected to
temp_reading_path = get_characteristic_path(device._path, temp_reading_uuid)
temp_period_path = get_characteristic_path(device._path, temp_period_uuid)
temp = bus.get('org.bluez', temp_reading_path)
period = bus.get('org.bluez', temp_period_path)

# Read value of characteristics
print(temp.ReadValue({}))
# [0]
print(period.ReadValue({}))
# [232, 3]
print(as_int(period.ReadValue({})))
# 1000

# Write a new value to one of the characteristics
new_value = int(1500).to_bytes(2, byteorder='little')
period.WriteValue(new_value, {})

# Enable eventloop for notifications
def temp_handler(iface, prop_changed, prop_removed):
    """Notify event handler for temperature"""
    if 'Value' in prop_changed:
        print(f"Temp value: {as_int(prop_changed['Value'])} \u00B0C")

mainloop = GLib.MainLoop()
temp.onPropertiesChanged = temp_handler
temp.StartNotify()
try:
    mainloop.run()
except KeyboardInterrupt:
    mainloop.quit()
    temp.StopNotify()
    device.Disconnect()

How to shut down CherryPy if no incoming connections for a specified time?

I am using CherryPy to speak to an authentication server. The script runs fine if all the inputted information is correct. But if the user makes a mistake typing their ID, the internal HTTP error screen fires OK, but the server keeps running and nothing else in the script will run until the CherryPy engine is closed, so I have to kill the script manually. Is there some code I can put in the index along the lines of
if timer > 10 and connections == 0:
    close cherrypy (< I have a method for this already)
I'm mostly a data mangler, so I'm not used to web servers. Googling shows lots of hits for closing CherryPy when there are too many connections, but not for when there have been no connections for a specified (short) time. I realise the point of a web server is usually to hang around waiting for connections, so this may be an odd case. All the same, any help is welcome.
Interesting use case. You can use the CherryPy plugin infrastructure to do something like that. Take a look at this ActivityMonitor plugin implementation: it shuts down the server if it is not handling anything and hasn't seen any request for a specified amount of time (in this case 10 seconds).
You may have to adjust the logic for how to shut it down, or do anything else, in the _verify method.
If you want to read a bit more about the publish/subscribe architecture, take a look at the CherryPy docs.
import time
import threading

import cherrypy
from cherrypy.process.plugins import Monitor

class ActivityMonitor(Monitor):
    def __init__(self, bus, wait_time, monitor_time=None):
        """
        bus: cherrypy.engine
        wait_time: Seconds since last request that we consider to be active.
        monitor_time: Seconds that we'll wait before verifying the activity.
                      If it is not defined, wait half the `wait_time`.
        """
        if monitor_time is None:
            # if monitor time is not defined, then verify half
            # the wait time since the last request
            monitor_time = wait_time / 2
        super().__init__(
            bus, self._verify, monitor_time, self.__class__.__name__
        )
        # use a lock to make sure the thread that triggers the before_request
        # and after_request does not collide with the monitor method (_verify)
        self._active_request_lock = threading.Lock()
        self._active_requests = 0
        self._wait_time = wait_time
        self._last_request_ts = time.time()

    def _verify(self):
        # verify that we don't have any active requests and
        # shut down the server in case we haven't seen any activity
        # since self._last_request_ts + self._wait_time
        with self._active_request_lock:
            if (not self._active_requests and
                    self._last_request_ts + self._wait_time < time.time()):
                self.bus.exit()  # shutdown the engine

    def before_request(self):
        with self._active_request_lock:
            self._active_requests += 1

    def after_request(self):
        with self._active_request_lock:
            self._active_requests -= 1
            # update the last time a request was served
            self._last_request_ts = time.time()

class Root:
    @cherrypy.expose
    def index(self):
        return "Hello user: current time {:.0f}".format(time.time())

def main():
    # here is how to use the plugin:
    ActivityMonitor(cherrypy.engine, wait_time=10, monitor_time=5).subscribe()
    cherrypy.quickstart(Root())

if __name__ == '__main__':
    main()

How to lock VirtualBox to get a screenshot through the SOAP API

I'm trying to use the SOAP interface of VirtualBox 6.1 from Python to get a screenshot of a machine. I can start the machine, but I get locking errors whenever I try to retrieve the screen layout.
This is the code:
import zeep

# helper to show the session lock status
def show_lock_state(session_id):
    session_state = service.ISession_getState(session_id)
    print('current session state:', session_state)

# connect
client = zeep.Client('http://127.0.0.1:18083?wsdl')
service = client.create_service("{http://www.virtualbox.org/}vboxBinding", 'http://127.0.0.1:18083?wsdl')
manager_id = service.IWebsessionManager_logon('fakeuser', 'fakepassword')
session_id = service.IWebsessionManager_getSessionObject(manager_id)

# get the machine id and start it
machine_id = service.IVirtualBox_findMachine(manager_id, 'Debian')
progress_id = service.IMachine_launchVMProcess(machine_id, session_id, 'gui')
service.IProgress_waitForCompletion(progress_id, -1)
print('Machine has been started!')
show_lock_state(session_id)

# unlock and then lock to be sure, doesn't have any effect apparently
service.ISession_unlockMachine(session_id)
service.IMachine_lockMachine(machine_id, session_id, 'Shared')
show_lock_state(session_id)

console_id = service.ISession_getConsole(session_id)
display_id = service.IConsole_getDisplay(console_id)
print(service.IDisplay_getGuestScreenLayout(display_id))
The machine is started properly, but the last line gives the error VirtualBox error: rc=0x80004001, which from what I have read means a locked session.
I tried to release and acquire the lock again, but even though that succeeds, the error remains. I went through the documentation but cannot find other types of locks that I'm supposed to use, except the Write lock, which is not usable here since the machine is running. I could not find any example in any language.
I found an Android app called VBoxManager with this SOAP screenshot capability.
Running it through a MITM proxy, I reconstructed the calls it performs and wrote them as the Zeep equivalent. In case anyone is interested in the future, the last lines of the above script are now:
import base64  # needed to decode the returned image data

console_id = service.ISession_getConsole(session_id)
display_id = service.IConsole_getDisplay(console_id)
resolution = service.IDisplay_getScreenResolution(display_id, 0)
print(f'display data: {resolution}')
image_data = service.IDisplay_takeScreenShotToArray(
    display_id,
    0,
    resolution['width'],
    resolution['height'],
    'PNG')
with open('screenshot.png', 'wb') as f:
    f.write(base64.b64decode(image_data))

Message properties seem to get lost after routing through Azure IoT edgeHub

I'm not sure if this is a bug or I am missing something. I also created an issue on GitHub some days ago, but with no response so far.
Here is my scenario:
I'm running a Raspberry Pi as a transparent IoT Edge gateway with two custom modules in addition to edgeAgent and edgeHub. The edgeHub is configured to route the messages coming from the leaf device to one of the custom modules with the route below.
FROM /messages/* WHERE NOT IS_DEFINED($connectionModuleId) INTO BrokeredEndpoint(\"/modules/camera-capture/inputs/input1\")
In the module, I added a function which listens for incoming messages on input1, and I can see the messages and print the message body. In the leaf device application, I'm sending messages via MQTT with application properties (see code snippet 1). When I change the route to...
FROM /messages/* WHERE (CameraState = 'true') INTO BrokeredEndpoint(\"/modules/camera-capture/inputs/input1\")
...only half of the messages are routed to the module, which indicates that the property is found by the edgeHub and interpreted correctly. However, when I try to extract the properties of the message in the CameraCapture module (see code snippet 2), they seem to be empty (see console output).
So it seems like the message properties are getting lost after routing through the edgeHub. The result is the same using AMQP.
This is how I send the message (snippet 1):
client = IoTHubClient(CONNECTION_STRING, PROTOCOL)
set_certificates(client)
message = IoTHubMessage("test message")

# send a message every two seconds
while True:
    # add custom application properties
    prop_map = message.properties()
    if run_camera:
        prop_map.add_or_update("CameraState", "true")
    else:
        prop_map.add_or_update("CameraState", "false")
    client.send_event_async(message, send_confirmation_callback, None)
    print("Message transmitted to IoT Edge")
    time.sleep(2)
This is the receiver (snippet 2):
def receive_message_callback(message, hubManager):
    global RECEIVE_CALLBACKS
    message_buffer = message.get_bytearray()
    size = len(message_buffer)
    print("Message received: %s" % message_buffer[:size].decode('utf-8'))
    map_properties = message.properties()
    key_value_pair = map_properties.get_internals()
    print("Key value pair: %s" % key_value_pair)
    return IoTHubMessageDispositionResult.ACCEPTED
EDIT: Added Console logs:
Message received: test message
Key value pair: {}
Waiting...
Waiting...
Message received: test message
Key value pair: {}
The issue is known and tracked on GitHub: https://github.com/Azure/azure-iot-sdk-python/issues/244

Multiprocessing: share an object (socket) between processes

I would like to create a process that stores many objects (connections to devices via sockets). I have a GUI (PyQt5) that should display information about the progress of the processes and the status of the devices. An example tells more:
# Process 1
def conf1():
    dev = some_signal_that_ask_about_dev("device1")
    conf_dev(dev)
    return_device("device1", dev)

# Process 2
def conf2():
    dev = some_signal_that_ask_about_dev("device2")
    do_sth_with_dev(dev)
    return_device("device2", dev)

# Process 3
class DevicesHolder(object):
    def __init__(self):
        self.devices = {
            "device1": Device1("192.168.1.1", 8080),
            "device2": Device2("192.168.1.2", 8081)
        }

    def some_signal_that_ask_about_dev(self, dev_name):
        if self.devices[dev_name]:
            dev = self.devices[dev_name]
            # this device is taken by a process.
            # If the process takes the device and fails,
            # the device should be recreated!
            self.devices[dev_name] = None
            return dev

    def return_device(self, dev_name, dev):
        self.devices[dev_name] = dev

    def get_status_of_devices(self):
        # Check connection to devices and return response
        pass

# Process 4:
# GUI:
get_status_of_devices()
So process 1 and process 2 do some work and send progress to the GUI. I would like to have info about the device status as well.
Why not just create a local object in each process and send info from that process?
A process may only run for a few seconds, while the app runs for minutes. I want to know that there is a connection problem before I press the start button, rather than have the whole app fail because of the connection.
I think I am overcomplicating a simple problem. Help me! (A sketch of one possible approach follows after the list below.)
More info:
Every process configures something different, but over the same connection.
I would like this to work as quickly as possible.
It will run on Linux, but I care about being multi-platform.
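
A minimal sketch of the holder pattern described above, using multiprocessing.managers.BaseManager from the standard library to run DevicesHolder in its own process and hand out proxies to the workers and the GUI. All names besides the stdlib API are hypothetical, and note that live socket objects generally do not pickle, so this holder hands out status/parameters rather than the sockets themselves:

from multiprocessing import Process
from multiprocessing.managers import BaseManager

class DevicesHolder:
    """Owns the device table; lives in the manager's server process."""
    def __init__(self):
        # status placeholders instead of live sockets (sockets don't pickle)
        self.devices = {"device1": "idle", "device2": "idle"}

    def take_device(self, dev_name):
        # mark the device as taken and hand back its name/parameters
        if self.devices.get(dev_name) == "idle":
            self.devices[dev_name] = "taken"
            return dev_name
        return None

    def return_device(self, dev_name):
        self.devices[dev_name] = "idle"

    def get_status_of_devices(self):
        return dict(self.devices)

class DeviceManager(BaseManager):
    pass

# expose DevicesHolder through the manager; callers get a proxy object
DeviceManager.register('DevicesHolder', DevicesHolder)

def worker(holder, dev_name):
    dev = holder.take_device(dev_name)
    if dev is not None:
        # ... connect and configure the actual device here ...
        holder.return_device(dev_name)

if __name__ == '__main__':
    manager = DeviceManager()
    manager.start()                   # spawns the holder's server process
    holder = manager.DevicesHolder()  # shared proxy
    p1 = Process(target=worker, args=(holder, "device1"))
    p2 = Process(target=worker, args=(holder, "device2"))
    p1.start(); p2.start()
    print(holder.get_status_of_devices())  # the GUI would poll this
    p1.join(); p2.join()
    manager.shutdown()

Whether this is quick enough depends on how chatty the proxy calls are: each method call on the proxy is an IPC round trip to the manager process.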
