Why does systemd not send unit properties changed notifications on session bus? - python-3.x

I implemented a dbus systemd listener in Python (3.7) that is meant to monitor property changes of a systemd unit. On the session dbus it does not receive any notifications; running on the system dbus, the code does what is expected.
Is there a way to also receive unit-changed notifications on the session bus?
My system: a Raspberry Pi 4 running the latest version of Raspberry Pi OS.
This is the service I created.
[Unit]
Description = A dummy service
[Service]
Type = simple
ExecStart = /bin/true
RemainAfterExit=yes
I installed the service to /etc/systemd/system and ~/.config/systemd/user and executed daemon-reload for the system and the user session. After doing so, the service is known both as a user service and as a system service.
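That is (the standard reload commands, listed here for completeness):
sudo systemctl daemon-reload
systemctl --user daemon-reload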
This is the dummy_listener.py code:
#!/usr/bin/env python3
# Python version required: >= 3.7 (because of used asyncio API)
"""A simple subscriber/listener for systemd unit signals"""
import sys
import asyncio

from dbus_next.aio import MessageBus
from dbus_next import BusType


class DbusDummyService():  # pylint: disable=no-self-use
    """Asyncio based dummy.service listener"""

    async def init(self, bus_type=BusType.SESSION):
        """Register listener callback with dbus bus_type"""
        bus = await MessageBus(bus_type=bus_type).connect()
        # Get introspection XML
        introspection = await bus.introspect(
            'org.freedesktop.systemd1',
            '/org/freedesktop/systemd1/unit/dummy_2eservice')
        # Select systemd service object
        obj = bus.get_proxy_object(
            'org.freedesktop.systemd1',
            '/org/freedesktop/systemd1/unit/dummy_2eservice',
            introspection)
        # Get required interfaces
        properties_if = obj.get_interface('org.freedesktop.DBus.Properties')
        # Monitor service status changes
        properties_if.on_properties_changed(self.on_properties_changed_cb)

    def on_properties_changed_cb(self, interface_name, changed_props,
                                 invalidated_props):
        """Callback expected to be called on unit property changes"""
        print(f"Callback invoked for interface {interface_name}:")
        print("  Properties updated")
        for prop, val in changed_props.items():
            print(f"    {prop} set to {val.value}")
        print("  Properties invalidated")
        for prop in invalidated_props:
            print(f"    {prop} invalidated")


async def main(bus_type):
    """Asyncio main"""
    # Initialize dbus listener
    await DbusDummyService().init(bus_type)
    # Run loop forever (waiting for dbus signals)
    await asyncio.get_running_loop().create_future()


if __name__ == "__main__":
    try:
        BUS_TYPE = BusType.SYSTEM if 'sys' in sys.argv[1] else BusType.SESSION
    except BaseException:
        BUS_TYPE = BusType.SESSION
    asyncio.run(main(BUS_TYPE))
The listener is run like this on the system dbus:
sudo python3 dummy_listener.py sys
For the session bus it is run with:
python3 dummy_listener.py
In a separate window I now restart the dummy service and expect the listener to output the prints.
For the session dbus:
systemctl --user restart dummy
For the system dbus:
sudo systemctl restart dummy
On the session dbus the listener just prints nothing. On the system dbus, I receive a bunch of messages.
Any ideas?

systemd doesn't send PropertiesChanged signals unless at least one client has subscribed to them. You need to call the Subscribe() method of the org.freedesktop.systemd1.Manager interface on the /org/freedesktop/systemd1 object.
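For example, the init() method above could be extended before registering the property callback (a sketch following the same dbus_next proxy pattern; dbus_next exposes the D-Bus method Subscribe() as a generated call_subscribe() coroutine):
# Inside init(), using the already connected bus:
introspection = await bus.introspect('org.freedesktop.systemd1',
                                     '/org/freedesktop/systemd1')
manager_obj = bus.get_proxy_object('org.freedesktop.systemd1',
                                   '/org/freedesktop/systemd1', introspection)
manager_if = manager_obj.get_interface('org.freedesktop.systemd1.Manager')
# Without this call, systemd emits no signals to this connection
await manager_if.call_subscribe()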

Related

Django and Telegram bot

from telegram import InlineKeyboardButton, InlineKeyboardMarkup, Update
from telegram.ext import Updater, CommandHandler, CallbackQueryHandler, CallbackContext
import secrets


def start(update: Update, context: CallbackContext) -> None:
    chat_id = update.effective_chat.id
    context.bot.send_message(
        chat_id=chat_id,
        text="Thank you for using our telegram bot! We will send you notifications here!")


def main():
    updater = Updater('53049746:27b1xn8KRQdCdFERPVw7o')
    updater.dispatcher.add_handler(CommandHandler('start', start))
    # Start the Bot
    updater.start_polling()  # timeout=300
    # Run the bot until the user presses Ctrl-C or the process receives SIGINT,
    # SIGTERM or SIGABRT
    updater.idle()


main()
This is the code for my telegram bot. I run it with python3 bot.py and it works.
The question is: I have a Django project, so I need to run this bot.py in the background. What is the best way to do it? (Right now I start my Django project with python3 manage.py runserver; later I will use Docker for it.)
UPDATE:
I need bot.py to respond to commands like /start, /info, /help, etc.
And I need the Django app to serve URLs like mywebsite.com/send_telegram_msg?user_id=123123123123 which will trigger my bot to send a message.
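A minimal sketch of one way to expose such an endpoint, assuming python-telegram-bot's Bot class; the view name and URL wiring are hypothetical:
# views.py (hypothetical sketch)
from django.http import JsonResponse
from telegram import Bot

bot = Bot('53049746:27b1xn8KRQdCdFERPVw7o')  # same token bot.py uses

def send_telegram_msg(request):
    user_id = request.GET.get('user_id')  # e.g. ?user_id=123123123123
    bot.send_message(chat_id=user_id, text='Hello from Django!')
    return JsonResponse({'sent': True})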

Cannot connect to X server while running a Python file using PyQt4 via an Ubuntu service

I am trying to execute a Python file which uses PyQt4.
I am running the below service file:
[Unit]
Description = Test
After=multi-user.target
[Service]
Type=simple
ExecStart = /usr/bin/python3 /home/nvidia/main
Restart=on-abort
[Install]
WantedBy =multi-user.target
This file is located at /lib/systemd/system/test.service and I start the service with systemctl start test.
But starting this service results in an error: cannot connect to X Server, failed with result 'exit-code'.
I am using this script:
#!/usr/bin/python
#################################################################
# Author  = Rucha
# Version = V 2.0.3
# Class   = PR01 OOP
# Module  = pyqt4
# Date    = Jan 02 2021
#################################################################
import sys
from PyQt4 import QtGui


class MainWindow:
    def __init__(self):
        self.vbox = QtGui.QHBoxLayout()

    def Title(self, Window, Name):
        Window.setWindowTitle(Name)

    def window(self):
        app = QtGui.QApplication(sys.argv)
        w = QtGui.QWidget()
        w.setGeometry(800, 800, 500, 500)
        self.Title(w, "Test")
        w.show()
        sys.exit(app.exec_())


MainWindow1 = MainWindow()
MainWindow1.window()
Commonly, a service doesn't have the same environment as a normal user who is able to log in and start the X environment.
Therefore I guess DISPLAY is not set.
Try this in your service file, but make sure it starts after X is already running:
ExecStart = env -i DISPLAY=:0.0 /usr/bin/python3 /home/nvidia/main
Example: user root trying to run something on X, with and without DISPLAY:
#kcalc
qt.qpa.screen: QXcbConnection: Could not connect to display
Could not connect to any X display.
#env -i DISPLAY=:0.0 kcalc
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
You can check this in an XTerm with and/or without the DISPLAY variable.
[Unit]
Description = Test
After=multi-user.target
[Service]
Type=simple
Environment="DISPLAY=:0"
Environment="XAUTHORITY=/home/nvidia/.Xauthority"
ExecStart = /usr/bin/python /home/nvidia/main
Restart=on-failure
[Install]
WantedBy =graphical.target
I successfully executed the GUI via a systemd service using the above directive values.

How to connect Kafka consumer to Django app? Should I use a new thread for consumer or new process or new docker container?

I have a Django app which should consume Kafka messages and handle them with my handlers and existing models.
I use the https://kafka-python.readthedocs.io/en/master/usage.html library.
What is the right way to connect a KafkaConsumer to a Django app? Should I use a new daemon thread? Or a new process? Or a separate Docker container maybe?
Where should I place the code (a new Django app?) and how do I start it automatically when the Django app is ready? And how do I update the topics it listens to dynamically: should I kill the old consumer and start a new one in a new thread each time?
I had a similar problem; what I did was create a custom Django management command, then add a handler method for the functionality.
In deployment, you can launch it as a sidecar container.
import kafka
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def handle(self, *args, **options):
        # KAFKA_TOPIC_NAME, KAFKA_CONSUMER_GROUP and handler_method are project-specific
        consumer = kafka.KafkaConsumer(KAFKA_TOPIC_NAME,
                                       bootstrap_servers=["localhost:9092"],
                                       group_id=KAFKA_CONSUMER_GROUP)
        for message in consumer:
            handler_method(message)
As a sidecar, it will pick up any pending messages in the topic when it starts.
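Locally, it runs like any other management command; the module's filename under management/commands determines the command name (consume_kafka below is a placeholder):
python manage.py consume_kafka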
I know it's a late answer.
Regarding this issue, you might give Faust a try; it's a stream-processing library and can be integrated with Django.
In Docker, start two different containers that share the same code base, except that one of them runs a command which starts a Faust worker:
version: '3.3'

services:
  backend_service: &backend_service
    container_name: your_container_name
    build:
      context: .
      dockerfile: Dockerfile
    image: backend_service_image
    volumes:
      - ./your_codebase:/your_container_codebase_path
    ports:
      - "8000:8000"
    env_file: .env

  kafka_consumer:
    <<: *backend_service
    image: kafka_consumer_image
    container_name: kafka_consumer
    command: faust -A <your_project_root>.kafka:app worker -l info
Notice that in the kafka_consumer container a command is run: faust -A <your_project_root>.kafka:app worker -l info, where <your_project_root> is the namespace of the folder which holds the settings.py file.
In this folder, create a file kafka.py as follows:
import os

import faust
from django.conf import settings

# eventlet is used as a bridge to communicate with asyncio
os.environ.setdefault('FAUST_LOOP', 'eventlet')
os.environ.setdefault('DJANGO_SETTINGS_MODULE', '<your_root_namespace>.settings')

app = faust.App('set_a_name_here', broker=f"kafka://{settings.KAFKA_URL}")

# Specify a topic name
new_topic = app.topic("topic_to_process")


# Define a method to process the above topic
@app.agent(new_topic)
async def process_topic(stream):
    async for event in stream:
        ...  # process event
Bear in mind that this code is executed asynchronously, so you might run into issues when working with the Django ORM; to properly implement a consumer which uses the ORM you can use the sync_to_async wrapper.
Or use it as a decorator:
# kafka.py
from asgiref.sync import sync_to_async


@sync_to_async
def _do_something(event):
    random_model_id = event.get('random_model_id')
    random_model = RandomModel.objects.get(id=random_model_id)


@app.agent(new_topic)
async def process_topic(stream):
    async for event in stream:
        await _do_something(event)
Then you can debug the kafka_consumer container to check what is going on; to reflect code changes you need to restart the container.
If you are not using Docker, you need to install a supervisor and configure the command faust -A <your_project_root>.kafka:app worker -l info to run from your supervisord.conf file.
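A minimal sketch of such a supervisord.conf entry (program name and directory are placeholders):
[program:faust_worker]
command=faust -A <your_project_root>.kafka:app worker -l info
directory=/path/to/your/project
autostart=true
autorestart=true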

Using Dask from script

Is it possible to run dask from a python script?
In interactive session I can just write
from dask.distributed import Client
client = Client()
as described in all tutorials. If I put these lines in a script.py file, however, and execute it with python script.py, it immediately crashes.
Another option I found is to use MPI:
# script.py
from dask_mpi import initialize
initialize()
from dask.distributed import Client
client = Client() # Connect this local process to remote workers
And then run the script with mpirun -n 4 python script.py. This doesn't crash; however, if you print the client
print(client)
# <Client: scheduler='tcp://137.250.37.84:35145' processes=0 cores=0>
you see that no cores are used; accordingly, the script runs forever without doing anything.
How do I set my scripts up correctly?
If you want to create processes from within a Python script you need to protect that code in an if __name__ == "__main__": block
from dask.distributed import Client

if __name__ == "__main__":
    client = Client()
If you want to use dask-mpi then you need to run it with mpirun or mpiexec with a suitable number of processes.
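For example (dask-mpi dedicates rank 0 to the scheduler and rank 1 to the script running the client, so four processes leave two workers):
mpirun -n 4 python script.py  # 1 scheduler + 1 client + 2 workers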

Using gevent-socketio paster integration causes my application to be unresponsive

I am writing a Pyramid application that relies on gevent-socketio and redis. However, I noticed that when I navigate away from the view that establishes the socket.io connection, my application becomes unresponsive. In order to try and isolate the issue, I created another bare-bones application and discovered that using pubsub.listen() was causing the issue:
class TestNamespace(BaseNamespace):
    def initialize(self):
        self.spawn(self.emitter)

    def emitter(self):
        client = redis.pubsub()
        client.subscribe('anything')
        for broadcast in client.listen():
            if broadcast['type'] != 'message':
                continue
The way I'm starting up my application is as follows:
pserve --reload development.ini
However, I can only get my application to work if I use the serve.py from the examples:
import os.path

from socketio.server import SocketIOServer
from pyramid.paster import get_app
from gevent import monkey; monkey.patch_all()

HERE = os.path.abspath(os.path.dirname(__file__))

if __name__ == '__main__':
    app = get_app(os.path.join(HERE, 'development.ini'))
    print 'Listening on port http://0.0.0.0:8080 and on port 10843 (flash policy server)'
    SocketIOServer(('0.0.0.0', 8080), app,
                   resource="socket.io", policy_server=True,
                   policy_listener=('0.0.0.0', 10843)).serve_forever()
Unfortunately this is rather cumbersome for development, as I lose the --reload functionality. Ideally I'd like to use the paster integration entry point.
Another thing I noticed is that the gevent-socketio paster integration does not monkey-patch gevent, whereas the example serve.py does.
How can I get pserve --reload to work with gevent-socketio?
I've uploaded my test application to github: https://github.com/m-martinez/iotest
Under [server:main] in your ini file:
use = egg:gevent-socketio#paster
transports = websocket, xhr-multipart, xhr-polling
policy_server = True
host = 0.0.0.0
port = 6543
If you get an error, make sure you are using the latest version of gevent-socketio.
With no success using egg:gevent-socketio#paster, I ended up using gunicorn with watchdog to achieve what I wanted for development:
watchmedo auto-restart \
--pattern "*.py;*.ini" \
--directory ./iotest/ \
--recursive \
-- \
gunicorn --paste ./iotest/development.ini
This is what my [server:main] section looks like:
[server:main]
use = egg:gunicorn#main
worker_class = socketio.sgunicorn.GeventSocketIOWorker
host = 0.0.0.0
port = 8080
debug = True
logconfig = %(here)s/development.ini
