How to use pystemd to control systemd timedated ntp service? - python-3.x

I'm working on a Python app that needs to get the NTPSynchronized parameter from systemd-timedated. I'd also like to be able to start and stop the NTP service by using the SetNTP method.
To communicate with timedated over d-bus I have been using this as reference: https://www.freedesktop.org/wiki/Software/systemd/timedated/
I previously got this working with dbus-python, but have since learned that this library has been deprecated. I tried the dbus_next package, but that does not have support for Python 3.5, which I need.
I came across the pystemd package, but I am unsure whether it can be used to do what I want. The only documentation I have been able to find is this example (https://github.com/facebookincubator/pystemd), but I cannot figure out how to use it to work with systemd-timedated.
Here is the code I have that works with dbus-python:
import dbus
BUS_NAME = 'org.freedesktop.timedate1'
IFACE = 'org.freedesktop.timedate1'
bus = dbus.SystemBus()
timedate_obj = bus.get_object(BUS_NAME, '/org/freedesktop/timedate1')
# Get synchronization value
is_sync = timedate_obj.Get(BUS_NAME, 'NTPSynchronized', dbus_interface=dbus.PROPERTIES_IFACE)
# Turn off NTP
timedate_obj.SetNTP(False, False, dbus_interface=IFACE)
Here's what I have so far with pystemd, but I don't think I'm accessing it in the right way:
from pystemd.systemd1 import Unit
unit = Unit(b'systemd-timesyncd.service')
unit.load()
# Try to access properties
prop = unit.Properties
prop.NTPSynchronized
Running that I get:
AttributeError: 'SDInterface' object has no attribute 'NTPSynchronized'
I have a feeling that either the service I entered is wrong, or the way I'm accessing properties is wrong, or even both are wrong.
Any help or advice is appreciated.

Looking at the source code (https://github.com/facebookincubator/pystemd/blob/master/pystemd/systemd1/unit.py), it appears that the pystemd.systemd1 Unit object uses a fixed destination of "org.freedesktop.systemd1" plus the service name.
This is not what I want because I am trying to access "org.freedesktop.timedate1"
So instead I instantiated its base class, SDObject, from pystemd.base (https://github.com/facebookincubator/pystemd/blob/master/pystemd/base.py)
The following code allowed me to get the sync status of NTP
from pystemd.base import SDObject

obj = SDObject(
    destination=b'org.freedesktop.timedate1',
    path=b'/org/freedesktop/timedate1',
    bus=None,
    _autoload=False
)
obj.load()

is_sync = obj.Properties.Get('org.freedesktop.timedate1', 'NTPSynchronized')
print(is_sync)
Not sure if this is what the library author intended, but hey it works!
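The same SDObject should also cover the SetNTP half of the question. Below is a hedged sketch (not tested against a live bus): pystemd appears to expose each D-Bus interface as an attribute named after the last component of the interface name (which is why Properties works above), so the org.freedesktop.timedate1 methods would land on obj.timedate1. The sdobject parameter is a hypothetical injection point added here purely so the wrapper can be exercised without a system bus.

```python
# Sketch: toggling NTP through pystemd's SDObject, following the same
# pattern as the Properties.Get call above. Assumes pystemd exposes the
# org.freedesktop.timedate1 interface as the `timedate1` attribute
# (interfaces appear to be keyed by the last segment of their name).

def set_ntp(enabled, interactive=False, sdobject=None):
    """Call org.freedesktop.timedate1.SetNTP(use_ntp, interactive).

    `sdobject` is a hypothetical injection point for testing; by default
    a pystemd SDObject is created for the timedated destination and path.
    """
    if sdobject is None:
        from pystemd.base import SDObject  # imported lazily
        sdobject = SDObject(
            destination=b'org.freedesktop.timedate1',
            path=b'/org/freedesktop/timedate1',
            bus=None,
            _autoload=False,
        )
        sdobject.load()
    # SetNTP takes (use_ntp: bool, interactive: bool) per the timedated D-Bus API
    return sdobject.timedate1.SetNTP(enabled, interactive)
```

Calling set_ntp(False) should then stop the NTP service the same way the dbus-python SetNTP call did; note that changing this setting normally requires polkit authorization.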

Related

How to enable logging/tracing with Axum?

I am learning Axum and would like to add logging to a service I have put together, but unfortunately I cannot get it to work.
What I have done?
I added tower-http = { version = "0.3.5", features = ["trace"] } to Cargo.toml, and in the definition of the service I have this:
use tower_http::trace::TraceLayer;
let app = Router::new()
    .route("/:name/path", axum::routing::get(handler))
    .layer(TraceLayer::new_for_http());
But when I start the application and make requests to the endpoint, nothing gets logged.
Is there any configuration step I could have missed?
** Edit 1 **
As suggested in the comment, I added the tracing-subscriber crate:
tracing-subscriber = { version = "0.3"}
and updated the code as follows:
use tower_http::trace::TraceLayer;
tracing_subscriber::fmt().init();
let app = Router::new()
    .route("/:name/path", axum::routing::get(handler))
    .layer(TraceLayer::new_for_http());
But yet, still no log output.
** Edit 2 **
Ok so what finally worked was
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};
and then initialize as follows:
tracing_subscriber::registry()
    .with(tracing_subscriber::fmt::layer())
    .init();
Even though this gets the log output, I cannot explain why, so perhaps someone who understands how these crates work can give an explanation, which I can accept as the answer to the question.
You can get up and running quickly with the tracing-subscriber crate:
tracing_subscriber::fmt()
    .with_max_level(tracing::Level::DEBUG)
    .init();
The difference between the attempts above is simply a matter of defaults. TraceLayer emits its request/response events at the DEBUG level. fmt() installs a default INFO-level filter, which discards those DEBUG events, while registry().with(..).init() does not configure a level filter at all, so the events get through.
You can also change the behavior of TraceLayer by using the customizable on_* methods.
See also:
How to use the tracing library? for more introductory tracing configurations
How to log and filter requests with Axum/Tokio? to help reduce the noise

Micropython and Bluetooth on ESP32

I know the support for bluetooth is still under development but it seems to cover everything I need at this point so I decided to give it a try.
I just want to simulate reading from a source of data (an EKG machine), so I came up with this code:
from ubluetooth import BLE, UUID
from ubluetooth import FLAG_READ, FLAG_NOTIFY, FLAG_WRITE
import time
ekg_data = [-305,-431,-131,440 ,1158,1424,1445,1623,1500,1018,142 ,-384,-324,-414,-77 ,334 ,-372,-154,366 ,7613,1461,1403,6133,-179,-381,-224,-135,-168,-208,-187,-181,-180,-160,-160,-151,-150,-151,-138,-141,-128,-118,-106,-798,-677,-430,-253,-122,98 ,133 ,281 ,354 ,390 ,519 ,475 ,558 ,565 ,533 ,593 ,458 ,377 ,107 ,-335,-719,-116,-129,-132,-131,-119,-122,-111,-106,-105,-935,-971,-877,-841,-841,-725,-757,-660,-641,-660,-554,-592,-496,-473,-486,-387,-431,-350,-364,-347,-208,-365,-362]
bt = BLE()
bt.active(True)
print('----')
print(bt.config('mac'))
print(bt.config('gap_name'))
HR_UUID = UUID(0x180D)
HR_CHAR = (UUID(0x2A37), FLAG_READ | FLAG_NOTIFY,)
HR_SERVICE = (HR_UUID, (HR_CHAR,),)
SERVICES = (HR_SERVICE,)
((ekg,),) = bt.gatts_register_services(SERVICES)
# bt.gap_advertise(100, 'MicroPython EKG')
count = 0
while True:
    if count >= len(ekg_data):
        count = 0
    # signed=True is needed because the samples include negative values
    bt.gatts_write(ekg, ekg_data[count].to_bytes(2, 'big', signed=True))
    print(ekg_data[count])
    time.sleep_ms(1000)
    count += 1
Now the code compiles and runs (I can see the output on the console), but I cannot find the device in my Bluetooth app (I am using the Nordic app).
Can anyone with more knowledge in this area tell me if I am overlooking something? I tried turning advertising off and on because I thought I might be overriding something with it, but that didn't help either...
I think your code is missing multiple things.
First, you are not setting up an irq handler (event handling) for MicroPython BLE, as you can see from the docs or from their GitHub code.
Also, I can't see you setting a buffer or anything like that; please review the examples for what you are asking about here. Good job, by the way.
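One more detail worth noting, independent of the BLE setup: the ekg_data list contains negative samples (e.g. -305), and int.to_bytes only accepts negative values when signed=True is passed, so the values must also be decoded with signed=True on the receiving end. A minimal illustration of the round trip:

```python
# Packing signed EKG samples into 2-byte big-endian values: negative
# ints require signed=True, otherwise to_bytes raises OverflowError.
def pack_sample(sample):
    return sample.to_bytes(2, 'big', signed=True)

def unpack_sample(data):
    return int.from_bytes(data, 'big', signed=True)

try:
    (-305).to_bytes(2, 'big')            # unsigned conversion of a negative value
except OverflowError as e:
    print('unsigned conversion fails:', e)

print(unpack_sample(pack_sample(-305)))  # round-trips back to -305
```

All the sample values in ekg_data fit in a signed 16-bit range, so two bytes per sample is enough.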

Change Device Wallpaper in Python/Kivy

I have a simple app and, among other things, I need this app to be able to change the wallpaper of a device on Android.
Now, I've looked around on the net and pyjnius seems like the obvious choice. The problem is I don't know the first thing about Java, but a quick Google search turns up WallpaperManager as something I could use.
Here's the question: how do I implement that WallpaperManager functionality in my Kivy app with pyjnius?
Again, NOT a java dev so don't shoot
I don't know Java either, but after examining some Java examples I came up with a solution. Don't forget to add the SET_WALLPAPER permission to your buildozer.spec file. You also need to get the storage permission for this example to work.
from jnius import autoclass, cast

PythonActivity = autoclass('org.kivy.android.PythonActivity')
try:
    Environment = autoclass("android.os.Environment")
    path = Environment.getExternalStorageDirectory().toString()
    currentActivity = cast('android.app.Activity', PythonActivity.mActivity)
    context = cast('android.content.Context', currentActivity.getApplicationContext())
    File = autoclass('java.io.File')
    file = File(path + "/test.jpg")
    BitmapFactory = autoclass('android.graphics.BitmapFactory')
    bitmap = BitmapFactory.decodeFile(file.getAbsolutePath())
    WallpaperManager = autoclass('android.app.WallpaperManager')
    manager = WallpaperManager.getInstance(context)
    manager.setBitmap(bitmap)
except Exception as e:
    print(e)

How to connect a sink to a external waveform port in REDHAWK?

I'm trying to write a unit test for a REDHAWK waveform. I would like to use stream sources to input data and stream/message sinks to store the output. I have written unit tests for components this way, but wanted to create a test for a waveform as well. I found a solution for connecting a StreamSource to a waveform's port, but have not been able to determine how to connect a sink to a waveform port.
For a source and a component (where self.comp is the component), normally one can use the following to connect them:
src = StreamSource(streamId='strm1', format='short')
src.connect(providesComponent=self.comp,
            providesPortName='dataShort_in',
            connectionId='testConn')
For a source and a waveform (where self.app is the waveform), I was able to get the following to work:
src = StreamSource(streamId='strm1', format='short')
src.connect(providesComponent=CorbaObject(self.app.getPort('dataShort_in')),
            connectionId='testConn')
However, for a sink I would normally call connect on the component:
sink = StreamSink('short')
self.comp.connect(sink, usesPortName='dataShort_out')
I tried to use a similar approach as for the source case by getting the port from the waveform as below:
sink = StreamSink('short')
self.app.getPort('dataShort_out').connectPort(sink, 'outputConn')
However, this gives the error:
File "/usr/local/redhawk/core/lib/python/ossie/cf/Port_idl.py", line 86, in connectPort
return self._obj.invoke("connectPort", _0_CF.Port._d_connectPort, args)
BAD_PARAM: CORBA.BAD_PARAM(omniORB.BAD_PARAM_WrongPythonType, CORBA.COMPLETED_NO, ["Expecting object reference, got <class 'bulkio.sandbox.streamsink.StreamSink'>", "Operation 'connectPort' parameter 0"])
I am not sure how I can get a CORBA obj ref for the sink to use here. Or is there another approach I can use here to connect the port to the sink?
I am using REDHAWK 2.2.2 on Centos 7.
I think I have found a solution to my own question. I ended up creating a new class that manages port connections and works for both sinks and sources. I called it ConnectionManager (hopefully it won't be confused with the ossie.utils.model.connection.ConnectionManager class).
class ConnectionManager:
    def __init__(self):
        self.connections = list()

    def clear(self):
        del self.connections[:]

    def connect(self, usesPort, providesPort, id):
        usesPort.connectPort(providesPort, id)
        self.connections.append((usesPort, id))

    def disconnectAll(self):
        for port, id in self.connections:
            port.disconnectPort(id)
        self.clear()
Here's an example using a StreamSource (self.cm is a ConnectionManager):
strm = sb.StreamSource(streamID='strm1', format='short')
self.cm.connect(strm.getPort('shortOut'),
                self.app.getPort('dataShort_in'),
                'connID')
And an example using a StreamSink:
sink = sb.StreamSink('short')
self.cm.connect(self.app.getPort('dataShort_out'),
                sink.getPort('shortIn'),
                'conn2ID')
My unit test setUp method has a call to self.cm.clear() and the tearDown method a call to self.cm.disconnectAll() to clean up the connections after each test.
The only thing I don't understand is the names of the ports for the sink and source classes. Using the {format}{In|Out} names works, but I don't know why.
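The ConnectionManager's contract can be sanity-checked without a running domain by exercising it against stub port objects. The class is restated verbatim below so the snippet is self-contained; StubPort is purely hypothetical (real ports come from getPort and are CORBA references).

```python
# Exercising the ConnectionManager above with stub ports instead of real
# CORBA references; `StubPort` is hypothetical and only records calls.
class StubPort:
    def __init__(self):
        self.connected = {}          # connection id -> provides port
    def connectPort(self, providesPort, id):
        self.connected[id] = providesPort
    def disconnectPort(self, id):
        del self.connected[id]

class ConnectionManager:
    def __init__(self):
        self.connections = list()
    def clear(self):
        del self.connections[:]
    def connect(self, usesPort, providesPort, id):
        usesPort.connectPort(providesPort, id)
        self.connections.append((usesPort, id))
    def disconnectAll(self):
        for port, id in self.connections:
            port.disconnectPort(id)
        self.clear()

cm = ConnectionManager()
uses, provides = StubPort(), StubPort()
cm.connect(uses, provides, 'connID')
print(len(uses.connected))   # one connection recorded on the uses port
cm.disconnectAll()
print(len(uses.connected))   # back to zero, and cm.connections is empty
```

This mirrors the setUp/tearDown usage described above: connections accumulate per test and disconnectAll tears every one of them down.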
The same process that you applied for connecting a component to a sink applies to an application, as long as the application is a sandbox object rather than a CORBA one:
dom = redhawk.attach()
app = dom.apps[0]
sink = sb.StreamSink('short')
app.connect(sink)
The next code shows the names of the ports. In this case, there is just one of type short.
from pprint import pprint
pprint(sink._providesPortDict)
The code below shows the syntax for using a CORBA reference instead of a sandbox object.
sink_port = sink.getPort('shortIn')
ref = app.ref
ref.getPort('dataShort_out').connectPort(sink_port, 'outputConn')
You can run a waveform in the sandbox. Note that the waveform's components need to run on the local host.
Use the nodeBooter shell command or kickDomain from the redhawk Python package to start a domain manager and a device manager.
Sample code to run a waveform in the sandbox:
import os
from ossie.utils import redhawk, sb
dom = redhawk.attach()
SDRROOT = os.getenv('SDRROOT')
waveform_dir = os.path.join(SDRROOT, 'dom', 'waveforms')
waveform_name = os.listdir(waveform_dir)[0]
app = dom.createApplication(waveform_name)
sink = sb.StreamSink()
app.connect(sink)

Airflow: Using MySqlHook to get connection

I'm trying to get a connection object while using the MySqlHook. Assume I saved a mysql connection in the webserver admin called test_connection. What I've done:
mysql_hook = MySqlHook(conn_name_attr = 'test_connection')
conn = mysql_hook.get_conn()
This gives me an error: 'tuple' object has no attribute 'get_conn'
Any help would be very appreciated!
I am not sure where that code example comes from, especially the parameter conn_name_attr. It seems that the parameter is wrong.
After looking into the models and the hook itself, it seems it should be:
MySqlHook(mysql_conn_id='test_connection')
Also, if you get back a tuple try printing it since there might be an error message or other helpful information inside it.
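For completeness, here is a hedged sketch of the corrected call sequence. The hook parameter is a hypothetical injection point so the function can be exercised without an Airflow installation; get_conn() comes from Airflow's DbApiHook base class and returns a DB-API connection, not a tuple.

```python
# Sketch: fetching rows via MySqlHook with the corrected parameter name.
# `hook` can be injected for testing; normally it is built from the
# connection id stored in the Airflow admin ('test_connection' here).
def fetch_all(sql, hook=None):
    if hook is None:
        from airflow.hooks.mysql_hook import MySqlHook  # Airflow 1.x import path
        hook = MySqlHook(mysql_conn_id='test_connection')
    conn = hook.get_conn()       # DB-API connection object
    try:
        cursor = conn.cursor()
        cursor.execute(sql)
        return cursor.fetchall()
    finally:
        conn.close()
```

For simple queries, MySqlHook also inherits convenience methods such as get_records from DbApiHook, which wrap this cursor dance for you.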
