How to read notifications using D-Bus and Python 3

I was developing a simple application that reads notifications from D-Bus and does some work when one arrives.
This turned out to be quite a headache, so I am sharing my code with you all.

import gi.repository.GLib
import dbus
from dbus.mainloop.glib import DBusGMainLoop

def notifications(bus, message):
    # do your magic
    pass

DBusGMainLoop(set_as_default=True)

bus = dbus.SessionBus()
bus.add_match_string_non_blocking("eavesdrop=true, interface='org.freedesktop.Notifications', member='Notify'")
bus.add_message_filter(notifications)

mainloop = gi.repository.GLib.MainLoop()
mainloop.run()
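For reference, here is a minimal sketch (an addition, not part of the original snippet) of what the handler body could do: it filters for Notify calls and pulls the application name, summary and body out of the message arguments.

def notifications(bus, message):
    # Only react to the Notify method of org.freedesktop.Notifications.
    if message.get_member() != "Notify":
        return
    # Notify(app_name, replaces_id, app_icon, summary, body, actions, hints, expire_timeout)
    args = message.get_args_list()
    app_name, summary, body = args[0], args[3], args[4]
    print(f"{app_name}: {summary} - {body}")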

Related

python zmq Server replying with multithreading?

I am working on a Python script that will work as a server on my Linux machine.
I am using the ZMQ library.
Here's what I want to achieve:
The server receives the payload.
The server starts another thread and passes it the socket and the payload.
While the second thread is handling the data, the first must already be listening for another request.
When the second thread finishes, it must send back the reply.
I tried that, but I get zmq.error.ZMQError: Operation cannot be accomplished in current state.
After a little research I found out that ZMQ sockets are not thread safe, meaning you cannot share a socket between threads.
I tried the script without multithreading and it works perfectly.
So how can I do that?
Code:
#!/bin/python
import json
import threading
import time
import zmq
import os
import functions  # <-- This is my script functions.py

if not __name__ == "__main__": exit()

os.chdir("/home/youssef/python")

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")

def run(data):
    global socket
    data = json.loads(data)
    func = functions.match_action(data["action"])
    del data["action"]
    mess = func(data)
    socket.send(mess.encode("utf-8"))  # <-- here's the reply

while True:
    # Wait for next request from client
    data = socket.recv().decode("utf-8")
    threading.Thread(target=run, args=(data,)).start()
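The question above has no answer in this thread, but for context: the usual way to handle requests concurrently without sharing a REP socket between threads is ZMQ's ROUTER/DEALER proxy pattern, in which every worker thread owns its own REP socket connected over inproc. A minimal sketch follows; the worker body and thread count are placeholders, not taken from the original post.

import threading
import zmq

def worker(context):
    # Each worker owns its own REP socket; sockets are never shared across threads.
    sock = context.socket(zmq.REP)
    sock.connect("inproc://workers")
    while True:
        data = sock.recv().decode("utf-8")
        reply = data.upper()  # placeholder for the real processing
        sock.send(reply.encode("utf-8"))

def main():
    context = zmq.Context()
    frontend = context.socket(zmq.ROUTER)
    frontend.bind("tcp://*:5555")
    backend = context.socket(zmq.DEALER)
    backend.bind("inproc://workers")
    for _ in range(4):
        threading.Thread(target=worker, args=(context,), daemon=True).start()
    zmq.proxy(frontend, backend)  # blocks, shuttling requests to idle workers

if __name__ == "__main__":
    main()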

Using Telegram Bot with Django

I am trying to use my Telegram bot with Django. I want the bot to keep running in the background, and I am using apps.py to do this. There is one problem, though: because the bot runs an infinite loop, the Django server never starts.
apps.py:
from django.apps import AppConfig
import os

class BotConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'bot'

    def ready(self):
        from . import jobs
        if os.environ.get('RUN_MAIN', None) != 'true':
            jobs.StartBot()
jobs.py:

# imports assume python-telegram-bot v13.x, which this code targets
from telegram import Update
from telegram.ext import ChatMemberHandler, Updater

def StartBot():
    updater = Updater("API KEY")
    dp = updater.dispatcher
    dp.add_handler(ChatMemberHandler(GetStatus, ChatMemberHandler.CHAT_MEMBER))
    updater.start_polling(allowed_updates=Update.ALL_TYPES)
    updater.idle()
What's the best way to run my bot in the background while making sure that the Django server runs normally? I tried Django background tasks, but it's not compatible with Django 4.0.
The purpose of Updater.idle is to keep the main thread alive because start_polling only starts some background threads that don't block the main thread. If you want to run other stuff in the main thread, skip updater.idle() and instead call Updater.stop manually when the program should shut down.
Disclaimer: I'm currently the maintainer of python-telegram-bot
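Applied to the jobs.py above, a minimal sketch might look like the following (the StopBot helper and the module-level updater are illustrative additions, not prescribed by the answer; GetStatus is the handler from the question):

from telegram import Update
from telegram.ext import ChatMemberHandler, Updater

updater = None

def StartBot():
    global updater
    updater = Updater("API KEY")
    dp = updater.dispatcher
    dp.add_handler(ChatMemberHandler(GetStatus, ChatMemberHandler.CHAT_MEMBER))
    # start_polling() returns right away; without updater.idle() the main
    # thread stays free, so Django's startup can continue normally.
    updater.start_polling(allowed_updates=Update.ALL_TYPES)

def StopBot():
    # Call this when the process should shut down.
    if updater is not None:
        updater.stop()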

Python aiogram bot: send message from another thread

The Telegram bot I'm making can execute a function that takes a few minutes to process, and I'd like to be able to keep using the bot while it's processing that function.
I'm using aiogram and asyncio, and I tried using Python threading to make this possible.
The code I currently have is:
import asyncio
from queue import Queue
from threading import Thread
import time
import logging

from aiogram import Bot, types
from aiogram.types.message import ContentType
from aiogram.contrib.middlewares.logging import LoggingMiddleware
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.dispatcher import Dispatcher, FSMContext
from aiogram.utils.executor import start_webhook
from aiogram.types import InputFile

...

loop = asyncio.get_event_loop()
bot = Bot(token=BOT_TOKEN, loop=loop)
dp = Dispatcher(bot, storage=MemoryStorage())
dp.middleware.setup(LoggingMiddleware())
task_queue = Queue()

...

async def send_result(id):
    logging.warning("entered send_result function")
    image_res = InputFile(path_or_bytesio="images/result/res.jpg")
    await bot.send_photo(id, image_res, FINISHED_MESSAGE)

def queue_processing():
    while True:
        if not task_queue.empty():
            task = task_queue.get()
            if task["type"] == "nst":
                nst.run(task["style"], task["content"])
                send_fut = asyncio.run_coroutine_threadsafe(send_result(task['id']), loop)
                send_fut.result()
            task_queue.task_done()
        time.sleep(2)

if __name__ == "__main__":
    executor_images = Thread(target=queue_processing, args=())
    executor_images.start()

    start_webhook(
        dispatcher=dp,
        webhook_path=WEBHOOK_PATH,
        skip_updates=False,
        on_startup=on_startup,
        host=WEBAPP_HOST,
        port=WEBAPP_PORT,
    )
So I'm trying to set up a separate thread running a loop that processes a queue of slow tasks, which would let me keep chatting with the bot in the meantime and would send the result message (an image) to the chat once a task is finished.
However, this doesn't work. My friend came up with this solution while doing a similar task about a year ago, and it does work in his bot, but it doesn't seem to work in mine.
Judging by the logs, it never even enters the send_result function, because the warning never comes through. The second thread does work properly, though: the result image is saved and sitting in its assigned path by the time nst.run finishes.
I tried A LOT of different things and I'm very puzzled as to why this solution doesn't work for me when it works with another bot. For example, I tried using asyncio.create_task instead of asyncio.run_coroutine_threadsafe, but to no avail.
To my understanding, you don't need to pass a loop to aiogram's Bot or Dispatcher anymore, but in that case I don't know how to hand a task to the main thread from the second one.
Versions I'm using: aiogram 2.18, asyncio 3.4.3, Python 3.9.10.
Solved: the issue was that you can't access the bot's loop directly (via bot.loop or dp.loop), even if you pass your own asyncio loop to the bot or the dispatcher.
The solution was to grab the main thread's loop by calling asyncio.get_event_loop() (which returns the currently running loop, if there is one) from inside one of the message handlers, because the loop is running at that point, and to pass it to asyncio.run_coroutine_threadsafe (I used the "task" dictionary for that), like this: asyncio.run_coroutine_threadsafe(send_result(task['id']), task['loop']).
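As a concrete illustration of that fix, a handler can capture the running loop and ship it along with the task; queue_processing then schedules the reply with asyncio.run_coroutine_threadsafe(send_result(task['id']), task['loop']) exactly as described above. The handler name, photo trigger and file paths below are illustrative, not taken from the original bot.

@dp.message_handler(content_types=ContentType.PHOTO)
async def handle_image(message: types.Message):
    # This coroutine runs inside the bot's event loop, so get_event_loop()
    # returns the loop the worker thread must target.
    task_queue.put({
        "type": "nst",
        "id": message.chat.id,
        "style": "images/style/style.jpg",      # illustrative paths
        "content": "images/content/content.jpg",
        "loop": asyncio.get_event_loop(),
    })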

Is there a way to set a global variable to be used with aiortc?

I'm trying to have a Python RTC client use a global variable so that I can reuse it in multiple functions.
I'm using this for an RTC project I've been working on. I have a functioning JS client, but the functions work differently in Python.
The functions on the server and JS client side are my own and do not have parameters, and I hope to avoid having to use parameters in the Python client I'm making.
I've been using the aiortc cli.py from their GitHub as a basis for how my Python client should work, but I don't run it asynchronously, because I am trying to learn and control when events happen.
The source code can be found here; I am referring to the code in lines 71-72:
https://github.com/aiortc/aiortc/blob/master/examples/datachannel-cli/cli.py
This is the code I'm trying to run properly.
I've only included the code relevant to my current issue:
import argparse
import asyncio
import logging
import time

from aiortc import RTCIceCandidate, RTCPeerConnection, RTCSessionDescription
from aiortc.contrib.signaling import add_signaling_arguments, create_signaling

pc = None
channel = None

def createRTCPeer():
    print("starting RTC Peer")
    pc = RTCPeerConnection()
    print("created Peer", pc)
    return pc

def pythonCreateDataChannel():
    print("creating datachannel")
    channel = pc.CreateDataChannel("chat")
The createRTCPeer function works as intended, creating an RTCPeerConnection object, but my pythonCreateDataChannel reports an error if I have pc set to None beforehand:
AttributeError: 'NoneType' object has no attribute 'CreateDataChannel'
and it will report
NameError: name 'channel' is not defined
if channel isn't set in the global scope beforehand; the same goes for pc.
Have you tried this:
import argparse
import asyncio
import logging
import time

from aiortc import RTCIceCandidate, RTCPeerConnection, RTCSessionDescription
from aiortc.contrib.signaling import add_signaling_arguments, create_signaling

pc = None
channel = None

def createRTCPeer():
    print("starting RTC Peer")
    global pc
    pc = RTCPeerConnection()
    print("created Peer", pc)

def pythonCreateDataChannel():
    print("creating datachannel")
    global channel
    channel = pc.CreateDataChannel("chat")
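One additional note (an editorial addition, not part of the original answer): aiortc's RTCPeerConnection names its data-channel factory createDataChannel with a lowercase c, so once the global-variable issue is fixed the call presumably needs to be:

channel = pc.createDataChannel("chat")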

Using stem to control Tor in Python 3.3

I am trying to use stem to have a small script run through Tor. I can't seem to get stem to work. Here is my code:
import sys
import urllib.request
import re

from stem.connection import connect_port
from stem import Signal
from stem.control import Controller

controller = connect_port(port=9151)

def change():
    controller.authenticate()
    controller.signal(Signal.NEWNYM)

def getIp():
    print(urllib.request.urlopen("http://my-ip.heroku.com").read(30).decode('utf-8'))

def connectTor():
    controller = connect_port(port=9151)
    controller.connect()
    getIp()
    if not controller:
        sys.exit(1)
    print("nope")

def disconnect():
    controller.close()

if __name__ == '__main__':
    connectTor()
    getIp()
    change()
    getIp()
    disconnect()
Basically, all of the IPs that display are the same, when in theory, they should all be different. What can I do to make this code work?
To use Tor you need to direct traffic through its SocksPort (Tor acts as a local SOCKS proxy). In your code above you don't have anything attempting to make urllib go through Tor.
For examples, see Stem's client usage tutorials. I'm not sure offhand whether SocksiPy or PycURL have Python 3.x counterparts; if not, you'll need to find an alternative.
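For illustration, here is a minimal sketch of pointing urllib at Tor's SocksPort with PySocks (an addition, not from the original answer; it assumes Tor's SOCKS proxy is listening on 127.0.0.1:9150, which matches the 9151 control port used above):

import socket
import urllib.request

import socks  # pip install PySocks

# Route every newly created socket through Tor's SOCKS proxy.
socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 9150)
socket.socket = socks.socksocket

print(urllib.request.urlopen("http://my-ip.heroku.com").read(30).decode("utf-8"))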
