How can I auto-execute a ban command in python-telegram-bot? - python-3.x

I'm relatively new to Python (I have some notions of C++) and I want to create a bot that automatically bans Russian spam bots with Cyrillic characters in their nicknames as soon as they join the group.
I copy-pasted parts of several different examples and I don't know if I'm doing it right. Here is the code:
import re
import logging
from telegram import ReplyKeyboardMarkup, ReplyKeyboardRemove, Update
from telegram.ext import (
    Updater,
    CommandHandler,
    MessageHandler,
    Filters,
    ConversationHandler,
    CallbackContext,
)

# Minimal stubs for the handlers referenced below (left over from the
# copy-pasted echobot example); they must exist before being registered.
def start(update: Update, context: CallbackContext):
    update.message.reply_text("Hi!")

def help_command(update: Update, context: CallbackContext):
    update.message.reply_text("Help!")

def echo(update: Update, context: CallbackContext):
    update.message.reply_text(update.message.text)

def has_cyrillic(text):
    # note: the range [а-я] does not include ё/Ё
    return bool(re.search('[а-яА-Я]', text))

def add_group(update: Update, context: CallbackContext):
    for member in update.message.new_chat_members:
        if has_cyrillic(member.full_name):
            # --- SUPPOSEDLY, AUTO BAN WOULD BE HERE ---
            update.message.reply_text(f"{member.full_name} has been banned")
        else:
            update.message.reply_text(f"{member.full_name} just joined the group")

updater = Updater("<MY-BOT-TOKEN>")
# Get the dispatcher to register handlers
dispatcher = updater.dispatcher
# on different commands - answer in Telegram
dispatcher.add_handler(CommandHandler("start", start))
dispatcher.add_handler(CommandHandler("help", help_command))
# on non-command i.e. message - echo the message on Telegram
dispatcher.add_handler(MessageHandler(Filters.text & ~Filters.command, echo))
# the new-member handler must also be registered with the dispatcher
dispatcher.add_handler(MessageHandler(Filters.status_update.new_chat_members, add_group))

# Start the Bot
updater.start_polling()
# Run the bot until you press Ctrl-C or the process receives SIGINT,
# SIGTERM or SIGABRT. This should be used most of the time, since
# start_polling() is non-blocking and will stop the bot gracefully.
updater.idle()
I'm using the newest version of Python and python-telegram-bot.
Thanks in advance!
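For the marked spot, here is a minimal sketch of one way to do the ban. It assumes python-telegram-bot v13.7+ (where kick_chat_member was renamed to ban_chat_member) and that the bot is an administrator of the group:

def add_group(update: Update, context: CallbackContext):
    for member in update.message.new_chat_members:
        if has_cyrillic(member.full_name):
            # ban the new member (requires the bot to have admin rights)
            context.bot.ban_chat_member(
                chat_id=update.effective_chat.id,
                user_id=member.id,
            )
            update.message.reply_text(f"{member.full_name} has been banned")
        else:
            update.message.reply_text(f"{member.full_name} just joined the group")

On older v13 releases the call is context.bot.kick_chat_member with the same arguments.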

Related

Using Telegram Bot with Django

I am trying to use my Telegram bot with Django. I want the bot to keep running in the background. I am using apps.py to do this, but there's one problem: the bot runs an infinite loop, so once it starts, the Django server never starts.
apps.py:
from django.apps import AppConfig
import os

class BotConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'bot'

    def ready(self):
        from . import jobs
        if os.environ.get('RUN_MAIN', None) != 'true':
            jobs.StartBot()
jobs.py:
from telegram import Update
from telegram.ext import Updater, ChatMemberHandler

def StartBot():
    updater = Updater("API KEY")
    dp = updater.dispatcher
    # GetStatus is the poster's handler, defined elsewhere in the module
    dp.add_handler(ChatMemberHandler(GetStatus, ChatMemberHandler.CHAT_MEMBER))
    updater.start_polling(allowed_updates=Update.ALL_TYPES)
    updater.idle()
What's the best way to run my bot in the background while making sure that the Django server runs normally? I tried Django background tasks, but it's not compatible with Django 4.0.
The purpose of Updater.idle is to keep the main thread alive, because start_polling only starts some background threads and doesn't block the main thread. If you want to run other stuff in the main thread, skip updater.idle() and instead call Updater.stop manually when the program should shut down.
Disclaimer: I'm currently the maintainer of python-telegram-bot
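A minimal sketch of that suggestion (assuming python-telegram-bot v13.x, as in the question; registering the stop with atexit is just one illustrative shutdown hook):

import atexit
from telegram import Update
from telegram.ext import Updater, ChatMemberHandler

def StartBot():
    updater = Updater("API KEY")
    dp = updater.dispatcher
    dp.add_handler(ChatMemberHandler(GetStatus, ChatMemberHandler.CHAT_MEMBER))
    updater.start_polling(allowed_updates=Update.ALL_TYPES)
    # no updater.idle(): polling runs in background threads, so the
    # Django process continues and the server starts normally
    atexit.register(updater.stop)  # stop the bot when the process exits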

Python aiogram bot: send message from another thread

The Telegram bot I'm making can execute a function that takes a few minutes to run, and I'd like to be able to keep using the bot while that function is being processed.
I'm using aiogram, asyncio and I tried using Python threading to make this possible.
The code I currently have is:
import asyncio
from queue import Queue
from threading import Thread
import time
import logging

from aiogram import Bot, types
from aiogram.types.message import ContentType
from aiogram.contrib.middlewares.logging import LoggingMiddleware
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.dispatcher import Dispatcher, FSMContext
from aiogram.utils.executor import start_webhook
from aiogram.types import InputFile

...

loop = asyncio.get_event_loop()
bot = Bot(token=BOT_TOKEN, loop=loop)
dp = Dispatcher(bot, storage=MemoryStorage())
dp.middleware.setup(LoggingMiddleware())
task_queue = Queue()

...

async def send_result(id):
    logging.warning("entered send_result function")
    image_res = InputFile(path_or_bytesio="images/result/res.jpg")
    await bot.send_photo(id, image_res, FINISHED_MESSAGE)

def queue_processing():
    while True:
        if not task_queue.empty():
            task = task_queue.get()
            if task["type"] == "nst":
                nst.run(task["style"], task["content"])
                send_fut = asyncio.run_coroutine_threadsafe(send_result(task['id']), loop)
                send_fut.result()
            task_queue.task_done()
        time.sleep(2)

if __name__ == "__main__":
    executor_images = Thread(target=queue_processing, args=())
    executor_images.start()
    start_webhook(
        dispatcher=dp,
        webhook_path=WEBHOOK_PATH,
        skip_updates=False,
        on_startup=on_startup,
        host=WEBAPP_HOST,
        port=WEBAPP_PORT,
    )
So I'm trying to set up a separate thread that runs a loop processing a queue of slow tasks, which lets me keep chatting with the bot in the meantime; the worker then sends the result message (an image) to the chat once a task is finished.
However, this doesn't work. My friend came up with this solution while doing a similar task about a year ago, and it does work in his bot, but it doesn't seem to work in mine.
Judging by the logs, it never even enters the send_result function, because the warning never comes through. The second thread does work properly, though: the result image is saved and sits in its assigned path by the time nst.run finishes.
I tried A LOT of different things, and I'm very puzzled that this solution doesn't work for me when it does work with another bot. For example, I tried asyncio.create_task instead of asyncio.run_coroutine_threadsafe, but to no avail.
To my understanding, you don't need to pass a loop to aiogram's Bot or Dispatcher anymore, but in that case I don't know how to send a task to the main thread from the second one.
Versions I'm using: aiogram 2.18, asyncio 3.4.3, Python 3.9.10.
Solved: the issue was that you can't access the bot's loop directly (via bot.loop or dp.loop), even if you pass your own asyncio loop to the bot or the dispatcher.
The solution was to grab the main thread's loop by calling asyncio.get_event_loop() (which returns the currently running loop, if there is one) from inside one of the message handlers, because the loop is guaranteed to be running at that point, and to pass it along to asyncio.run_coroutine_threadsafe (I used the "task" dictionary for that), like this: asyncio.run_coroutine_threadsafe(send_result(task['id']), task['loop']).
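In code, the fix looks roughly like this (a sketch reusing dp, task_queue and send_result from the script above; the handler and the payload keys are illustrative):

import asyncio
from aiogram import types

@dp.message_handler(content_types=types.ContentType.PHOTO)
async def enqueue_nst(message: types.Message):
    task_queue.put({
        "type": "nst",
        "id": message.chat.id,
        # inside a handler the main thread's loop is running,
        # so get_event_loop() returns it
        "loop": asyncio.get_event_loop(),
    })

# and in queue_processing(), on the worker thread:
#     send_fut = asyncio.run_coroutine_threadsafe(send_result(task["id"]), task["loop"])
#     send_fut.result()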

What is the best practice for interacting with subprocesses in Python?

I'm building an application that is intended to do a bulk job, processing data within another piece of software. To control the other software automatically I'm using pyautoit, and everything works fine except for application errors caused by the external software, which occur from time to time.
To handle those cases, I built a watchdog:
It starts the script with the bulk job within a subprocess
process = subprocess.Popen(['python', job_script, src_path],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           shell=True)
It listens to the system event using winevt.EventLog module
EventLog.Subscribe('System', 'Event/System[Level<=2]', handle_event)
If a system error occurs, it shuts everything down and restarts the script.
OK, so when a system error event occurs, it should be handled in a way that notifies the subprocess. That notification should then lead to the following action within the subprocess:
Within the subprocess there's an object controlling everything and continuously collecting generated data. In order not to have to start the whole job from the beginning after restarting the script, this object has to be dumped using pickle (which isn't the problem here!).
Listening to the system event from inside the subprocess didn't work; it results in a continuous loop when calling subprocess.Popen().
So, my question is how I can either subscribe to system events from inside a child process, or communicate between the parent and child process (i.e. send a message like "hey, an error occurred"), listen for it within the subprocess, and then create the dump.
I'm really sorry I'm not allowed to post any code in this case, but I hope (and actually think) that my description is understandable. My question is just about which module to use to accomplish this in the best way.
I'd be really happy if somebody could point me in the right direction...
Br,
Mic
I believe the best answer may lie here: https://docs.python.org/3/library/subprocess.html#subprocess.Popen.stdin
These attributes should allow for proper communication between the different processes fairly easily, and without any other dependencies.
Note that Popen.communicate() may suit better if other processes might cause issues.
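As a minimal illustration (a sketch, not part of the answer: communicate() is a one-shot call that waits for the child to exit, so it fits a final "dump and shut down" message better than an ongoing dialogue; 'dump' is a hypothetical command):

# send one final message, then collect everything the child prints
# on stdout/stderr until it exits
out, err = p.communicate(input=b'dump\r\n', timeout=30)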
EDIT to add example scripts:
main.py
from subprocess import *
import sys

def check_output(p):
    out = p.stdout.readline()
    return out

def send_data(p, data):
    p.stdin.write(bytes(f'{data}\r\n', 'utf8'))  # auto newline
    p.stdin.flush()

def initiate(p):
    #p.stdin.write(bytes('init\r\n', 'utf8'))  # function to send first communication
    #p.stdin.flush()
    send_data(p, 'init')
    return check_output(p)

def test(p, data):
    send_data(p, data)
    return check_output(p)

def main():
    exe_name = 'Doc2.py'
    p = Popen([sys.executable, exe_name], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
    print(initiate(p))
    print(test(p, 'test'))
    print(test(p, 'test2'))  # testing responses
    print(test(p, 'test3'))

if __name__ == '__main__':
    main()
Doc2.py
import sys, time, random

def recv_data():
    return sys.stdin.readline()

def send_data(data):
    # flush so the parent's readline() isn't stuck behind a buffered pipe
    print(data, flush=True)

while 1:
    d = recv_data()
    #print(f'd: {d}')
    if d.strip() == 'test':
        send_data('return')
    elif d.strip() == 'init':
        send_data('Acknowledge')
    else:
        send_data('Failed')
This is the best method I could come up with for cross-process communication. Also make sure all requests and responses don't contain newlines, or the code will break.
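Applied to the original watchdog, the child could treat one of these messages as the signal to save its state; a sketch under that assumption ('dump', handle_messages and the controller object are hypothetical names):

import pickle
import sys

def handle_messages(controller, state_path='job_state.pkl'):
    # the parent writes 'dump' to our stdin when its event handler fires
    for line in sys.stdin:
        if line.strip() == 'dump':
            with open(state_path, 'wb') as f:
                pickle.dump(controller, f)  # save progress for the restart
            break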

Python Telegram bot that gets stats from Scrapy

I'd like to write a Telegram bot that can provide Scrapy stats on request.
My attempt mostly works; the only issue is that forcefully closing the spider (obviously) doesn't stop the bot.
So I have two questions:
is my general approach the correct one?
is it possible to close the bot even on a forceful shutdown of the spider?
Here is the relevant class:
class TelegramBot(object):
    telegram_token = telegram_credentials.token

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def __init__(self, crawler):
        self.crawler = crawler
        cs = crawler.signals
        cs.connect(self._spider_closed, signal=signals.spider_closed)

        """Start the bot."""
        # Create the Updater and pass it your bot's token.
        # Make sure to set use_context=True to use the new context based callbacks
        # Post version 12 this will no longer be necessary
        self.updater = Updater(self.telegram_token, use_context=True)
        # Get the dispatcher to register handlers
        dp = self.updater.dispatcher
        # on different commands - answer in Telegram
        dp.add_handler(CommandHandler("stats", self.stats))
        # Start the Bot
        self.updater.start_polling()

    def _spider_closed(self, spider, reason):
        # Stop the Bot
        self.updater.stop()

    def stats(self, update, context):
        # Send a message with the stats
        msg = (
            "Spider "
            + self.crawler.spider.name
            + " stats: "
            + str(self.crawler.stats.get_stats())
        )
        update.message.reply_text(msg)
Here you can find my full code inside the Scrapy tutorial quotes spider https://github.com/jtommi/scrapy_telegram-bot_example/blob/master/tutorial/tutorial/telegram-bot.py
My code is a combination of
latencies extension from the "Learning Scrapy" book https://github.com/scalingexcellence/scrapybook/blob/master/ch09/properties/properties/latencies.py
echobot example from the python-telegram-bot library https://github.com/python-telegram-bot/python-telegram-bot/blob/master/examples/echobot.py
official scrapy documentation on stats collection https://docs.scrapy.org/en/latest/topics/stats.html
Calling updater.stop() will definitely stop the bot. From the python-telegram-bot docs:
"""Stops the polling/webhook thread, the dispatcher and the job queue."""
Check whether updater.stop() is actually called after the spider closes. The bot might not stop immediately, but it will stop eventually.
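If _spider_closed turns out not to run on a forceful shutdown, one option (my assumption, not part of the answer above) is to connect a second callback to Scrapy's engine_stopped signal, which is sent whenever the engine stops:

# in TelegramBot.__init__, next to the existing cs.connect(...):
cs.connect(self._engine_stopped, signal=signals.engine_stopped)

# new method on TelegramBot:
def _engine_stopped(self):
    self.updater.stop()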

How to shut down CherryPy when there are no incoming connections for a specified time?

I am using CherryPy to speak to an authentication server. The script runs fine if all the inputted information is fine. But if the user makes a mistake typing their ID, the internal HTTP error screen fires OK, yet the server keeps running and nothing else in the script will run until the CherryPy engine is closed, so I have to kill the script manually. Is there some code I can put in the index along the lines of:
if timer > 10 and connections == 0:
    close cherrypy (< I have a method for this already)
I'm mostly a data mangler, so I'm not used to web servers. Googling shows lots of hits for closing CherryPy when there are too many connections, but not when there have been no connections for a specified (short) time. I realise the point of a web server is usually to hang around waiting for connections, so this may be an odd case. All the same, any help is welcome.
Interesting use case. You can use the CherryPy plugin infrastructure to do something like that; take a look at this ActivityMonitor plugin implementation. It shuts the server down if it isn't handling anything and hasn't seen any request for a specified amount of time (in this case 10 seconds).
You may have to adjust the logic of how it shuts down, or do anything else you need, in the _verify method.
If you want to read a bit more about the publish/subscribe architecture take a look at the CherryPy Docs.
import time
import threading

import cherrypy
from cherrypy.process.plugins import Monitor


class ActivityMonitor(Monitor):

    def __init__(self, bus, wait_time, monitor_time=None):
        """
        bus: cherrypy.engine
        wait_time: Seconds since last request that we consider to be active.
        monitor_time: Seconds that we'll wait before verifying the activity.
                      If is not defined, wait half the `wait_time`.
        """
        if monitor_time is None:
            # if monitor time is not defined, then verify half
            # the wait time since the last request
            monitor_time = wait_time / 2
        super().__init__(
            bus, self._verify, monitor_time, self.__class__.__name__
        )
        # use a lock to make sure the thread that triggers the before_request
        # and after_request does not collide with the monitor method (_verify)
        self._active_request_lock = threading.Lock()
        self._active_requests = 0
        self._wait_time = wait_time
        self._last_request_ts = time.time()

    def _verify(self):
        # verify that we don't have any active requests and
        # shutdown the server in case we haven't seen any activity
        # since self._last_request_ts + self._wait_time
        with self._active_request_lock:
            if (not self._active_requests and
                    self._last_request_ts + self._wait_time < time.time()):
                self.bus.exit()  # shutdown the engine

    def before_request(self):
        with self._active_request_lock:
            self._active_requests += 1

    def after_request(self):
        with self._active_request_lock:
            self._active_requests -= 1
            # update the last time a request was served
            self._last_request_ts = time.time()


class Root:
    @cherrypy.expose
    def index(self):
        return "Hello user: current time {:.0f}".format(time.time())


def main():
    # here is how to use the plugin:
    ActivityMonitor(cherrypy.engine, wait_time=10, monitor_time=5).subscribe()
    cherrypy.quickstart(Root())


if __name__ == '__main__':
    main()
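One caveat worth noting: as excerpted, nothing ever calls the plugin's before_request/after_request callbacks. A hypothetical way to wire them up (my assumption, not part of the answer's code) is to run them from CherryPy's per-request hooks via custom tools:

def main():
    monitor = ActivityMonitor(cherrypy.engine, wait_time=10, monitor_time=5)
    monitor.subscribe()
    # run the plugin's callbacks at the start and end of every request
    cherrypy.tools.request_begin = cherrypy.Tool('on_start_resource', monitor.before_request)
    cherrypy.tools.request_end = cherrypy.Tool('on_end_request', monitor.after_request)
    cherrypy.quickstart(Root(), config={'/': {
        'tools.request_begin.on': True,
        'tools.request_end.on': True,
    }})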
