I am trying to write an API client for Telegram using Telethon.
https://github.com/LonamiWebs/Telethon
If you create a TelegramClient(session) it prompts for input upon initialization if your session isn’t authorized.
This is great when manually running the program from the terminal, but what if I want to run it inside a daemon or cron job?
It uses Python 3's built-in input() function to gather the input. I don't see any way in the library to specify a session file and check whether it's authorized before initializing a TelegramClient, and it's the initialization that will prompt for input if it isn't logged in.
This feels like a catch-22! Does anybody know whether this might produce an error that could be caught? Or what happens when input() is run with no tty? Would it just hang? Could I add a timeout in that case?
Thanks in advance for helping me understand better!
You state that initializing TelegramClient invokes the input function by default, but this actually happens inside the TelegramClient.start method (docs).
The solution you suggest at the end of your question is a fair approach, so let's use a timeout when invoking input:
from asyncio import get_event_loop, wait_for, TimeoutError
from functools import partial

from telethon import TelegramClient

async def ainput(prompt):
    """Reads input from stdin in an async way"""
    loop = get_event_loop()
    await loop.run_in_executor(None, print, prompt)
    return await loop.run_in_executor(None, input)

async def get_code(timeout):
    """Waits for the code from stdin with a timeout"""
    try:
        return await wait_for(
            ainput("Please, type the code you received: "),
            timeout=timeout
        )
    except TimeoutError:
        pass

client = TelegramClient(session, api_id, api_hash).start(
    phone=phone,
    code_callback=partial(get_code, 30)
)
You should keep in mind that when you call start, the phone and password arguments also read from stdin if you don't provide a callable or a default value, so you can handle them the same way as code_callback in this example.
In your case you could get the code from a POST to your API or in some other way; just get creative and write the callable that fits your needs.
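For example, here is a minimal sketch of a code_callback that never touches stdin; it polls a file that some other process (an API endpoint, a cron job, whatever) writes the login code to. The file path and timings are hypothetical, not something Telethon prescribes:

import asyncio
from pathlib import Path

CODE_FILE = Path("/tmp/telegram_login_code")  # hypothetical location

async def code_from_file(timeout=60, poll_interval=2):
    """Polls CODE_FILE until the login code shows up or the timeout expires."""
    waited = 0
    while waited < timeout:
        if CODE_FILE.exists():
            return CODE_FILE.read_text().strip()
        await asyncio.sleep(poll_interval)
        waited += poll_interval
    raise TimeoutError("no login code was provided in time")

# client = TelegramClient(session, api_id, api_hash).start(
#     phone=phone,
#     code_callback=code_from_file
# )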
The telegram bot I'm making can execute a function that takes a few minutes to process and I'd like to be able to continue to use the bot while it's processing the function.
I'm using aiogram, asyncio and I tried using Python threading to make this possible.
The code I currently have is:
import asyncio
from queue import Queue
from threading import Thread
import time
import logging
from aiogram import Bot, types
from aiogram.types.message import ContentType
from aiogram.contrib.middlewares.logging import LoggingMiddleware
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.dispatcher import Dispatcher, FSMContext
from aiogram.utils.executor import start_webhook
from aiogram.types import InputFile
...
loop = asyncio.get_event_loop()
bot = Bot(token=BOT_TOKEN, loop=loop)
dp = Dispatcher(bot, storage=MemoryStorage())
dp.middleware.setup(LoggingMiddleware())
task_queue = Queue()
...
async def send_result(id):
    logging.warning("entered send_result function")
    image_res = InputFile(path_or_bytesio="images/result/res.jpg")
    await bot.send_photo(id, image_res, FINISHED_MESSAGE)

def queue_processing():
    while True:
        if not task_queue.empty():
            task = task_queue.get()
            if task["type"] == "nst":
                nst.run(task["style"], task["content"])
                send_fut = asyncio.run_coroutine_threadsafe(send_result(task['id']), loop)
                send_fut.result()
            task_queue.task_done()
        time.sleep(2)

if __name__ == "__main__":
    executor_images = Thread(target=queue_processing, args=())
    executor_images.start()

    start_webhook(
        dispatcher=dp,
        webhook_path=WEBHOOK_PATH,
        skip_updates=False,
        on_startup=on_startup,
        host=WEBAPP_HOST,
        port=WEBAPP_PORT,
    )
So I'm trying to set up a separate thread that runs a loop processing a queue of slow tasks. That should let me keep chatting with the bot in the meantime, and the thread should send the result message (an image) to the chat once a task is finished.
However, this doesn't work. My friend came up with this solution while doing a similar task about a year ago, and it does work in his bot, but it doesn't seem to work in mine.
Judging by the logs, it never even enters the send_result function, because the warning never comes through. The second thread does work properly, though, and the result image is saved in its assigned path by the time nst.run finishes working.
I tried A LOT of different things and I'm very puzzled why this solution doesn't work for me, because it does work with another bot. For example, I tried using asyncio.create_task instead of asyncio.run_coroutine_threadsafe, but to no avail.
To my understanding, you don't need to pass a loop to aiogram's Bot or Dispatcher anymore, but in that case I don't know how to send a task to the main thread from the second one.
Versions I'm using: aiogram 2.18, asyncio 3.4.3, Python 3.9.10.
Solved: the issue was that you can't access the bot's loop directly (via bot.loop or dp.loop), even if you pass your own asyncio loop to the bot or the dispatcher.
So the solution was to grab the main thread's loop with asyncio.get_event_loop() (which returns the currently running loop, if there is one) from inside one of the message handlers, because the loop is running at that point, and pass it along to asyncio.run_coroutine_threadsafe (I used the "task" dictionary for that), like this: asyncio.run_coroutine_threadsafe(send_result(task['id']), task['loop']).
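A rough sketch of what that looks like (the handler name and task fields here are illustrative, not from the original code):

@dp.message_handler(commands=["nst"])
async def handle_nst(message: types.Message):
    # The loop is running inside a handler, so this returns the main thread's loop
    # (asyncio.get_running_loop() works as well on Python 3.7+).
    task_queue.put({
        "type": "nst",
        "id": message.chat.id,
        "style": ...,      # whatever nst.run needs
        "content": ...,
        "loop": asyncio.get_event_loop(),
    })

# In the worker thread, the captured loop is used instead of the module-level one:
# asyncio.run_coroutine_threadsafe(send_result(task["id"]), task["loop"]).result()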
I'm building an application which is intended to do a bulk job, processing data within another piece of software. To control the other software automatically I'm using pyautoit, and everything works fine, except for application errors caused by the external software, which occur from time to time.
To handle those cases, I built a watchdog:
It starts the script with the bulk job in a subprocess:
process = subprocess.Popen(['python', job_script, src_path], stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE, shell=True)
It listens to system events using the winevt.EventLog module:
EventLog.Subscribe('System', 'Event/System[Level<=2]', handle_event)
If an error occurs, it shuts everything down and restarts the script.
OK, so when a system error event occurs, it should be handled in a way that notifies the subprocess. This notification should then lead to the following action within the subprocess:
Within the subprocess there's an object controlling everything and continuously collecting
generated data. In order not to have to start the whole job from the beginning after restarting the script, this object has to be dumped using pickle (which isn't the problem here!).
Listening to system events from inside the subprocess didn't work; it results in a continuous loop when calling subprocess.Popen().
So, my question is: how can I either subscribe to system events from inside a child process, or communicate between the parent and child process - that is, send a message like "hey, an error occurred", listen for it within the subprocess and then create the dump?
I'm really sorry I'm not allowed to post any code in this case. But I hope (and actually think) that my description is understandable. My question is just about which module to use to accomplish this in the best way.
Would be really happy if somebody could point me in the right direction...
Br,
Mic
I believe the best answer may lie here: https://docs.python.org/3/library/subprocess.html#subprocess.Popen.stdin
These attributes should allow for proper communication between the different processes fairly easily, and without any other dependencies.
Note that Popen.communicate() may suit better if other processes may cause issues.
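For a one-shot exchange, something like this would do (assuming the child was started with stdin=PIPE; the message content here is made up):

# Sends one message, then waits for the child to exit and collects its output.
out, err = process.communicate(input=b'dump\r\n', timeout=30)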
EDIT to add example scripts:
main.py
from subprocess import *
import sys
def check_output(p):
    out = p.stdout.readline()
    return out

def send_data(p, data):
    p.stdin.write(bytes(f'{data}\r\n', 'utf8'))  # auto newline
    p.stdin.flush()

def initiate(p):
    #p.stdin.write(bytes('init\r\n', 'utf8'))  # function to send first communication
    #p.stdin.flush()
    send_data(p, 'init')
    return check_output(p)

def test(p, data):
    send_data(p, data)
    return check_output(p)

def main():
    exe_name = 'Doc2.py'
    p = Popen([sys.executable, exe_name], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
    print(initiate(p))
    print(test(p, 'test'))
    print(test(p, 'test2'))  # testing responses
    print(test(p, 'test3'))

if __name__ == '__main__':
    main()
Doc2.py
import sys, time, random

def recv_data():
    return sys.stdin.readline()

def send_data(data):
    print(data, flush=True)  # flush so the parent's readline() isn't stuck behind the pipe buffer

while 1:
    d = recv_data()
    #print(f'd: {d}')
    if d.strip() == 'test':
        send_data('return')
    elif d.strip() == 'init':
        send_data('Acknowledge')
    else:
        send_data('Failed')
This is the best method I could come up with for cross-process communication. Also make sure all requests and responses don't contain newlines, or the code will break.
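If the newline restriction ever becomes a problem, one workaround (just a sketch, not part of the scripts above) is to frame each message as one JSON object per line, so any newlines inside strings get escaped by json and can't break the framing:

import json, sys

def send_msg(pipe, obj):
    # One JSON document per line; json escapes any embedded newlines.
    pipe.write(json.dumps(obj).encode('utf8') + b'\n')
    pipe.flush()

def recv_msg(pipe):
    line = pipe.readline()
    return json.loads(line) if line else None

# Parent side: send_msg(p.stdin, {"cmd": "dump"})
# Child side:  msg = recv_msg(sys.stdin.buffer)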
I'm trying to find a way to program my bot to clear a specific number of messages in a channel. However, I do not know how to get my bot to run its code based on the user's input. For example, let's say I'm a user who wants to clear a specific number of messages, say 15 messages. I want my bot to then clear exactly 15 messages, no more, no less. How do I do that?
if message.content == "{clear":
    await message.channel.send("Okay")
    await message.channel.send("How many messages should I clear my dear sir?")
This is legit all I got lmao. I'm sorry that I'm such a disappointment to this community ;(
Using an on_message event, you'd have to use the startswith method and create an amount variable which takes your message content without {clear as its value:
if message.content.startswith("{clear"):
    amount = int(message.content[7:])
    await message.channel.purge(limit=amount+1)
However, I don't recommend using on_message events to create commands. You could use the commands framework of discord.py. It will be much easier for you to create commands.
A quick example:
from discord.ext import commands

bot = commands.Bot(command_prefix='{')

@bot.event
async def on_ready():
    print("Bot's ready to go")

@bot.command(name="clear")
async def clear(ctx, amount: int):
    await ctx.channel.purge(limit=amount+1)

bot.run("Your token")
ctx gives you access to the message author, channel, guild, ... and lets you call methods such as send, ...
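If you want the command to fail gracefully when someone types something that isn't a number, you can attach an error handler to it (the reply text here is just an example):

@clear.error
async def clear_error(ctx, error):
    # Raised when the amount argument can't be converted to int.
    if isinstance(error, commands.BadArgument):
        await ctx.send("Please give me a whole number of messages to clear.")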
In the python asyncio websockets library, the example calls run_forever(). Why is this required?
Shouldn't run_until_complete() block and run the websockets loop?
#!/usr/bin/env python
# WS server example
import asyncio
import websockets
async def hello(websocket, path):
    name = await websocket.recv()
    print(f"< {name}")
    greeting = f"Hello {name}!"
    await websocket.send(greeting)
    print(f"> {greeting}")
start_server = websockets.serve(hello, "localhost", 8765)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
# if you comment out the above line, this doesn't work, i.e., the server
# doesn't actually block waiting for data...
If I comment out run_forever(), the program ends immediately.
start_server is an awaitable returned by the library. Why isn't run_until_complete sufficient to cause it to block/await on hello()?
websockets.serve simply starts the server and exits immediately. (It still needs to be awaited because configuring the server can require network communication.) Because of that, you need to actually run the event loop.
Since the server is designed to run indefinitely, you cannot run the event loop in the usual way, by passing a coroutine to run_until_complete. As the server has already started, there is no coroutine to run, you just need to let the event loop run and do its job. This is where run_forever comes in useful - it tells the event loop to run (executing the tasks previously scheduled, such as those belonging to the server) indefinitely, or until told to stop by a call to loop.stop.
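For illustration, a minimal way to wire that up is a shutdown hook that calls loop.stop; this is my own sketch (Unix-only, since it uses add_signal_handler), not part of the original example:

import asyncio, signal

loop = asyncio.get_event_loop()
loop.add_signal_handler(signal.SIGINT, loop.stop)  # Ctrl+C stops run_forever()
loop.run_forever()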
In Python 3.7 and later one should use asyncio.run to run asyncio code, which will create a new event loop, so the above trick won't work. A good way to accomplish the above in modern asyncio code would be to use the serve_forever method (untested):
async def my_server():
    ws_server = await websockets.serve(hello, "localhost", 8765)
    await ws_server.server.serve_forever()

asyncio.run(my_server())
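Another option, if I remember the newer websockets examples correctly, is to use the server as an async context manager and simply await a Future that never completes:

async def main():
    async with websockets.serve(hello, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())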
I am currently writing a small Flask-based micro-service which launches other Python scripts via calls to a CLI using Python's subprocess module. My ultimate goal is to make a non-blocking async function call, triggered by HTTP requests to a route in the service, and have the service return a 200 response from that route while the async function runs in the background.
I have been perusing the docs (I am using Python 3.6.3 for this service) but cannot work out how to achieve this. Here is a small example of how my code is structured:
@app.route('/execute_job')
def execute_job():
    params = ...
    run_async_job(params)
    return 'Launched async job according to params, it is now running.'

async def run_async_job(params):
    command = 'run_python_cli_scripts args'
    proc = subprocess.Popen(command)
    # change some envs, do some file io, yada yada yada
    ...
    while True:
        if proc.poll() is not None:  # the cli script is finished
            return notify_external_api_job_complete()
I know that simply calling run_async_job(params) does not actually begin its execution, but instead returns an awaitable or Task which must be run in an event loop. My issue is that I cannot figure out how to run this task in an event loop such that the return in execute_job is reached before the task completes. Is this sort of thing possible? This is my first foray into async Python, and I am looking for behaviour similar to what you would see in async JavaScript. Is using async def for the function I want to be non-blocking the wrong approach, or is there a way to launch the task in an event loop in a non-blocking fashion, so that the aforementioned return 'Launched async job according to params, it is now running.' can be reached before run_async_job(params) completes?
Thanks in advance for your time and wisdom.
FWIW, for posterity: I opted for using a child process launched via the subprocess module. This was achievable by converting the library file I imported my async def'd function from into a script that takes command line arguments parsed with the argparse module. My route now looks like:
@app.route('/execute_job')
def execute_job():
    params = ...
    command = ('python', params)
    subprocess.Popen(command)
    return 'Launched async job according to params, it is now running.'
edit: formatting
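For completeness, a closely related alternative (my own sketch, not from the original post) keeps the polling idea from the question but moves it into a daemon thread, so the route can still return immediately without spawning a second Python process. The script name and the notify_external_api_job_complete helper are placeholders taken from the question:

import subprocess
import threading

def run_job_in_background(params):
    def worker():
        # Runs the CLI script and blocks only this background thread.
        proc = subprocess.Popen(['python', 'cli_script.py', *params])  # hypothetical script
        proc.wait()
        notify_external_api_job_complete()
    threading.Thread(target=worker, daemon=True).start()

@app.route('/execute_job')
def execute_job():
    params = [...]  # whatever the job needs
    run_job_in_background(params)
    return 'Launched async job according to params, it is now running.'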