Send Images with Hikari Python

I made a function to send an image on invocation, but instead it ends
up sending 'attachment://img.jpg'.
Here's the function:
@bot.command
@lightbulb.command('img', 'Test command')
@lightbulb.implements(lightbulb.SlashCommand)
async def imgsnd(ctx):
    filename = 'img.jpg'
    with open(filename, "rb") as fh:
        f = hikari.File('/Users/admin/Desktop/Bot/' + filename)
        await ctx.respond(f)

Sending attachments as a slash command initial response wasn't supported until 2.0.0.dev106. Consider upgrading and the issue will be solved.
Additionally, even though this wasn't part of the question, just a pointer: that open is not necessary and, on top of that, blocking. Instead, you can reduce your code to this:
@bot.command
@lightbulb.command('img', 'Test command')
@lightbulb.implements(lightbulb.SlashCommand)
async def imgsnd(ctx):
    f = hikari.File('/Users/admin/Desktop/Bot/img.jpg')
    await ctx.respond(f)


Python: how to write to stdin of a subprocess and read its output in real time

I have 2 programs.
The first (which could actually be written in any language, and therefore cannot be altered at all) looks like this:
#!/bin/env python3
import random

while True:
    s = input()                           # get input from stdin
    i = random.randint(0, len(s))         # process the input
    print(f"New output {i}", flush=True)  # print processed input to stdout
It runs forever: it reads something from stdin, processes it, and writes the result to stdout.
I am trying to write a second program in Python using the asyncio library.
It executes the first program as a subprocess and attempts to feed it input via its stdin and retrieve the result from its stdout.
Here is my code so far:
#!/bin/env python3
import asyncio
import asyncio.subprocess as asp

async def get_output(process, input):
    out, err = await process.communicate(input)
    print(err)  # shows that the program crashes
    return out

    # other attempt to implement
    process.stdin.write(input)
    await process.stdin.drain()        # flush input buffer
    out = await process.stdout.read()  # program is stuck here
    return out

async def create_process(cmd):
    process = await asp.create_subprocess_exec(
        cmd, stdin=asp.PIPE, stdout=asp.PIPE, stderr=asp.PIPE)
    return process

async def run():
    process = await create_process("./test.py")
    out = await get_output(process, b"input #1")
    print(out)  # b'New output 4'
    out = await get_output(process, b"input #2")
    print(out)  # b''
    out = await get_output(process, b"input #3")
    print(out)  # b''
    out = await get_output(process, b"input #4")
    print(out)  # b''

async def main():
    await asyncio.gather(run())

asyncio.run(main())
I'm struggling to implement the get_output function. It takes a bytestring (as needed by the input parameter of the .communicate() method) as a parameter, writes it to the program's stdin, reads the response from its stdout and returns it.
Right now, only the first call to get_output works properly. This is because the implementation of the .communicate() method calls the wait() method, effectively causing the program to terminate (which it isn't meant to). This can be verified by examining the value of err in the get_output function, which shows the first program reached EOF. And thus, the other calls to get_output return an empty bytestring.
I have tried another way, even less successfully, since the program gets stuck at the line out = await process.stdout.read(). I haven't figured out why.
My question is: how do I implement the get_output function to capture the program's output in (near) real time and keep it running? It doesn't have to use asyncio, but I have found this library to be the best one so far for that.
Thank you in advance!
If the first program is guaranteed to print only one line of output in response to the line of input that it has read, you can change await process.stdout.read() to await process.stdout.readline() and your second approach should work.
The reason it didn't work for you is that your run function has a bug: it never sends a newline to the child process. Because of that, the child process is stuck in input() and never responds. If you add \n at the end of the bytes literals you're passing to get_output, the code works correctly.
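Putting both fixes together (readline instead of read, plus a trailing newline on each input), here is a self-contained sketch; the inline CHILD script is a deterministic stand-in for the first program, so the output is predictable:

```python
import asyncio
import asyncio.subprocess as asp
import sys

# Stand-in for the first program (deterministic, unlike random.randint):
CHILD = "while True:\n s = input()\n print(f'New output {len(s)}', flush=True)"

async def get_output(process, data):
    # One line in (newline included, so the child's input() returns),
    # one line out (readline instead of read, so we don't wait for EOF).
    process.stdin.write(data + b"\n")
    await process.stdin.drain()
    return await process.stdout.readline()

async def main():
    process = await asp.create_subprocess_exec(
        sys.executable, "-u", "-c", CHILD,
        stdin=asp.PIPE, stdout=asp.PIPE, stderr=asp.PIPE)
    for i in range(1, 5):
        out = await get_output(process, f"input #{i}".encode())
        print(out)  # b'New output 8\n' every time (each input is 8 chars)
    process.stdin.close()
    await process.wait()

asyncio.run(main())
```

With the real ./test.py the byte values differ (random), but each call now returns exactly one response line while the child keeps running.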

Is there a workaround for the blocking that happens with Firebase Python SDK? Like adding a completion callback?

Recently, I have moved my REST server code from express.js to FastAPI. So far, I've been successful in the transition until recently. I've noticed, based on the Firebase Python Admin SDK documentation, that unlike node.js, the Python SDK is blocking. The documentation says here:
In Python and Go Admin SDKs, all write methods are blocking. That is, the write methods do not return until the writes are committed to the database.
I think this feature is having a certain effect on my code. It also could be how I've structured my code as well. Some code from one of my files is below:
from app.services.new_service import nService
from firebase_admin import db
import json
import redis

class TryNewService:
    async def tryNew_func(self, request):
        # I've already initialized everything in another file for firebase
        ref = db.reference()
        r = redis.Redis()
        holdingData = await nService().dialogflow_session(request)
        fulfillmentText = json.dumps(holdingData[-1])
        body = await request.json()
        if "user_prelimInfo_address" in holdingData:
            holdingData.append("session")
            holdingData.append(body["session"])
            print(holdingData)
            return holdingData
        else:
            if "Default Welcome Intent" in holdingData:
                pass
            else:
                UserVal = r.hget(name='{}'.format(body["session"]), key="userId").decode("utf-8")
                ref.child("users/{}".format(UserVal)).child("c_data").set({holdingData[0]: holdingData[1]})
                print(holdingData)
            return fulfillmentText
Is there any workaround for the blocking effect of the ref.set() line in my code? Kinda like adding a callback in node.js? I'm new to the asyncio world of Python 3.
Update as of 06/13/2020: I added the following code and am now getting a RuntimeError: Task attached to a different loop. In my second else statement I do the following:
loop = asyncio.new_event_loop()
UserVal = r.hget(name='{}'.format(body["session"]), key="userId").decode("utf-8")
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    result = await loop.run_in_executor(pool, ref.child("users/{}".format(UserVal)).child("c_data").set({holdingData[0]: holdingData[1]}))
    print("custom thread pool:{}".format(result))
I would appreciate some help in figuring out this new RuntimeError.
If you want to run synchronous code inside an async coroutine, then the steps are:
loop = get_event_loop()
Note: get and not new. get_event_loop provides the current event loop, while new_event_loop returns a new one
await loop.run_in_executor(None, sync_method)
First parameter = None -> use default executor instance
Second parameter (sync_method) is the synchronous code to be called.
Remember that resources used by sync_method need to be properly synchronized:
a) either using asyncio.Lock
b) or using asyncio.run_coroutine_threadsafe function(see an example below)
For this case, forget about ThreadPoolExecutor (which provides I/O parallelism, versus the concurrency provided by asyncio).
You can try following code:
loop = asyncio.get_event_loop()
UserVal = r.hget(name='{}'.format(body["session"]), key="userId").decode("utf-8")
result = await loop.run_in_executor(None, sync_method, ref, UserVal, holdingData)
print("custom thread pool:{}".format(result))
With a new function:
def sync_method(ref, UserVal, holdingData):
    result = ref.child("users/{}".format(UserVal)).child("c_data").set({holdingData[0]: holdingData[1]})
    return result
Please let me know your feedback
Note: the previous code is untested. I have only tested the next minimum example (using pytest & pytest-asyncio):
import asyncio
import time
import pytest

@pytest.mark.asyncio
async def test_1():
    loop = asyncio.get_event_loop()
    delay = 3.0
    result = await loop.run_in_executor(None, sync_method, delay)
    print(f"Result = {result}")

def sync_method(delay):
    time.sleep(delay)
    print(f"dddd {delay}")
    return "OK"
Answer to @jeff-ridgeway's comment:
Let's change the previous answer to clarify how to use run_coroutine_threadsafe to execute, from a sync worker thread, a coroutine that gathers these shared resources:
Add loop as an additional parameter in run_in_executor
Move all shared resources from sync_method to a new async_method, which is executed with run_coroutine_threadsafe
loop = asyncio.get_event_loop()
UserVal = r.hget(name='{}'.format(body["session"]), key="userId").decode("utf-8")
result = await loop.run_in_executor(None, sync_method, ref, UserVal, holdingData, loop)
print("custom thread pool:{}".format(result))

def sync_method(ref, UserVal, holdingData, loop):
    coro = async_method(ref, UserVal, holdingData)
    future = asyncio.run_coroutine_threadsafe(coro, loop)
    return future.result()

async def async_method(ref, UserVal, holdingData):
    result = ref.child("users/{}".format(UserVal)).child("c_data").set({holdingData[0]: holdingData[1]})
    return result
Note: the previous code is untested. And now my tested minimum example, updated:
@pytest.mark.asyncio
async def test_1():
    loop = asyncio.get_event_loop()
    delay = 3.0
    result = await loop.run_in_executor(None, sync_method, delay, loop)
    print(f"Result = {result}")

def sync_method(delay, loop):
    coro = async_method(delay)
    future = asyncio.run_coroutine_threadsafe(coro, loop)
    return future.result()

async def async_method(delay):
    time.sleep(delay)
    print(f"dddd {delay}")
    return "OK"
I hope this can be helpful
Run blocking database calls on the event loop using a ThreadPoolExecutor. See https://medium.com/@hiranya911/firebase-python-admin-sdk-with-asyncio-d65f39463916
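A minimal sketch of that pattern, untested against a real database (blocking_set and set_user_data are illustrative names; ref is assumed to be a firebase_admin db reference like the one in the question):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# Dedicated pool for blocking Firebase writes.
pool = ThreadPoolExecutor(max_workers=4)

def blocking_set(ref, user_val, data):
    # Synchronous SDK call; it blocks a worker thread, not the event loop.
    return ref.child("users/{}".format(user_val)).child("c_data").set(data)

async def set_user_data(ref, user_val, data):
    loop = asyncio.get_event_loop()
    # Pass the callable and its args separately; calling it here would
    # run (and block on) the write before the executor ever sees it.
    return await loop.run_in_executor(pool, blocking_set, ref, user_val, data)
```

Note that this is exactly the bug in the update above: run_in_executor was handed the already-evaluated result of .set(...) instead of a callable.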

I'm using asyncio but async function is blocking other async functions with await asyncio.sleep(5)

I've included asyncio to make my code asynchronous in my library twbotlib (https://github.com/truedl/twbotlib).
I tried async commands a few versions ago and everything went well, but I never checked whether it was really asynchronous. Then I tried to create a giveaway command and used await asyncio.sleep(5). I realized that it blocks all my other code...
After many attempts at playing with the asyncio code, I haven't managed to get it running without blocking...
(My Bot class in main.py has an attribute called self.loop, which is actually asyncio.get_event_loop())
I don't know if I'm doing everything correctly, because right after calling the run function, I call all later operations with await.
I've tried to replace the plain await with
await self.loop.create_task(foo).
I tried to do
await self.loop.ensure_future(foo) but nothing...
I've also tried to split the code into two functions (mainloop and check_data).
First of all, here is the run function, where I start the loop (just creating a task and calling run_forever):
def run(self, startup_function=None) -> None:
    """ Run the bot and start the main while. """
    self.loop.create_task(self.mainloop(startup_function))
    self.loop.run_forever()
Secondly, here is the mainloop function (all the awaited functions are blocking...):
async def mainloop(self, startup_function) -> None:
    """ The main loop that reads and processes the incoming data. """
    if startup_function:
        await startup_function()
    self.is_running = True
    while self.is_running:
        data = self.sock.recv(self.buffer).decode('utf-8').split('\n')
        await self.check_data(data)
And the last one is check_data (the part split out of mainloop; I've replaced the long ifs with "condition" for readability). Here, too, the awaits are blocking:
async def check_data(self, data: str) -> None:
    for line in data:
        if condition:
            message = self.get_message_object_from_str(line)
            if condition:
                if condition:
                    await self.commands[message.command](message, message.args)
                else:
                    await self.commands[message.command](message)
            elif hasattr(self.event, 'on_message'):
                await self.event.on_message(message)
        if self.logs:
            print(line)
There is no error message.
The code is blocking and I'm trying to change it to not block the code.
The loop processing your data is blocking your code: self.sock.recv() is a synchronous (blocking) socket call, and every command is awaited sequentially inside the same task, so the event loop never gets a chance to run anything else.
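For illustration, a possible sketch of a non-blocking mainloop (an assumption, not the library's actual code: it supposes self.sock is a plain socket-module socket). loop.sock_recv suspends the coroutine while waiting for data, so other tasks, like a giveaway's asyncio.sleep, can run:

```python
import asyncio

async def mainloop(self, startup_function) -> None:
    """Non-blocking variant: loop.sock_recv yields while waiting for data."""
    if startup_function:
        await startup_function()
    self.is_running = True
    loop = asyncio.get_event_loop()
    self.sock.setblocking(False)  # required by loop.sock_recv
    while self.is_running:
        # Suspends this coroutine instead of freezing the whole event loop.
        raw = await loop.sock_recv(self.sock, self.buffer)
        data = raw.decode('utf-8').split('\n')
        await self.check_data(data)
```

Long-running commands would additionally need to be scheduled with asyncio.create_task rather than awaited inline, so one command's sleep doesn't stall the read loop.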

using Subprocess to avoid long-running task from disconnecting discord.py bot?

I created a bot for my Discord server, that goes to the Reddit API for a given subreddit, and posts the Top 10 results for the Day in the Discord chat, based on the subreddit(s) that you input. It disregards self posts, and really only posts pictures and GIFs. The Discord message command would look something like this: =get funny awww news programming, posting the results for each subreddit as it gets them from the Reddit API (PRAW). THIS WORKS WITH NO PROBLEM. I know that the bot's ability to hit the API and post to discord works.
I added another command =getshuffled which puts all of the results from the subreddits in a large list, and then shuffles them before posting. This works really well with a request of up to ~50 subreddits.
This is what I need help with:
Because it can be such a large list of results, 1000+ results from 100+ subreddits, the bot is crashing on really big requests. Based on the help I got from my question yesterday, I understand what is going wrong: the bot starts, it talks to my Discord server, and when I pass it a long request, it stops talking to the server for too long while the Reddit API call is made, and the Discord connection fails.
So, what I think I need to do, is have a subprocess for the code that goes to the Reddit API and pulls the results, (which I think will let the discord connection stay running), and then pass those results BACK to the bot when it is finished....
Or... this is something that Asyncio can handle on its own...
I'm having a hard time with the subprocess call, as I knew I would.
Basically, I either need help with this subprocess trickery, or need to know if I'm being an idiot and Asyncio can handle all of this for me. I think this is just one of those "I don't know what I don't know" instances.
So to recap: The bot worked fine with smaller amounts of subreddits being shuffled. It goes through the args sent (which are subreddits), grabbing info for each post, and then shuffling before posting the links to discord. The problem is when it is a larger set of subreddits of ~ 50+. In order to get it to work with the larger amount, I need to have the Reddit call NOT block the main discord connection, and that's why I'm trying to make a subprocess.
Python version is 3.6 and Discord.py version is 0.16.12
This bot is hosted and running on PythonAnywhere
Code:
from redditBot_auth import reddit
import discord
import asyncio
from discord.ext.commands import Bot
# from discord.ext import commands
import platform
import subprocess
import ast

client = Bot(description="Pulls posts from Reddit", command_prefix="=", pm_help=False)

@client.event
async def on_ready():
    return await client.change_presence(game=discord.Game(name='Getting The Dank Memes'))

def is_number(s):
    try:
        int(s)
        return True
    except:
        pass

def show_title(s):
    try:
        if s == 'TITLES':
            return True
    except:
        pass

async def main_loop(*args, shuffled=False):
    print(type(args))
    q = 10
    # This takes an integer value argument from the input string.
    # It sets the number variable,
    # then deletes the number from the arguments list.
    title = False
    for item in args:
        if is_number(item):
            q = item
            q = int(q)
            if q > 15:
                q = 15
            args = [x for x in args if not is_number(x)]
        if show_title(item):
            title = True
            args = [x for x in args if not show_title(x)]
    number_of_posts = q * len(args)
    results = []
    TESTING = False  # If this is turned to True, the subreddit of each post will be posted. Will use defined list of results
    if shuffled == False:  # If they don't want it shuffled
        for item in args:
            # get subreddit results
            # post links into Discord as it gets them
            # The code for this works
            pass
    else:  # if they do want it shuffled
        output = subprocess.run(["python3.6", "get_reddit.py", "*args"])
        results = ast.literal_eval(output.decode("ascii"))
        # ^^ this is me trying to get the results back from the other process.
This is my get_reddit.py file:
# THIS CODE WORKS, JUST NEED TO CALL THE FUNCTION AND RETURN RESULTS
# TO THE MAIN_LOOP FUNCTION
from redditBot_auth import reddit
import random

def is_number(s):
    try:
        int(s)
        return True
    except:
        pass

def show_title(s):
    try:
        if s == 'TITLES':
            return True
    except:
        pass

async def get_results(*args, shuffled=False):
    q = 10
    # This takes an integer value argument from the input string.
    # It sets the number variable,
    # then deletes the number from the arguments list.
    title = False
    for item in args:
        if is_number(item):
            q = item
            q = int(q)
            if q > 15:
                q = 15
            args = [x for x in args if not is_number(x)]
        if show_title(item):
            title = True
            args = [x for x in args if not show_title(x)]
    results = []
    TESTING = False  # If this is turned to True, the subreddit of each post will be posted. Will use defined list of results.
    NoGrabResults = False
    # This pulls the data and creates a list of links for the bot to post
    if NoGrabResults == False:
        for item in args:
            try:
                # get the posts
                # put them in results list
                pass
            except Exception as e:
                # handle error
                pass
    try:
        # print('____SHUFFLED___')
        random.shuffle(results)
        random.shuffle(results)
        random.shuffle(results)
    except:
        # error stuff
        pass
    print(results)
    # I should be able to read that print statement for the results,
    # and then use that in the main bot function to post the results.
@client.command()
async def get(*args, brief="say '=get' followed by a list of subreddits", description="To get the 10 Top posts from a subreddit, say '=get' followed by a list of subreddits:\n'=get funny news pubg'\n would get the top 10 posts for today for each subreddit and post to the chat."):
    # sr = '+'.join(args)
    await main_loop(*args)

# THIS POSTS THE POSTS RANDOMLY
@client.command()
async def getshuffled(*args, brief="say '=getshuffled' followed by a list of subreddits", description="Does the same thing as =get, but grabs ALL of the posts and shuffles them, before posting."):
    await main_loop(*args, shuffled=True)

client.run('my ID')
UPDATE: Following advice, I had the command passed through a ThreadPoolExecutor as shown:
async def main(*args, shuffled):
    if shuffled == True:
        with concurrent.futures.ThreadPoolExecutor() as pool:
            results = await asyncio.AbstractEventLoop().run_in_executor(
                executor=pool, func=await main_loop(*args, shuffled=True))
            print('custom thread pool', results)
but this still results in errors when the script tries to talk to Discord:
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending coro=<Client._run_event() running at /home/GageBrk/.local/lib/python3.6/site-packages/discord/client.py:307> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f28acd8db28>()]>>
Event loop is closed
Destination must be Channel, PrivateChannel, User, or Object. Received NoneType
Destination must be Channel, PrivateChannel, User, or Object. Received NoneType
Destination must be Channel, PrivateChannel, User, or Object. Received NoneType
...
It is sending the results correctly, but discord is still losing connection.
praw relies on the requests library, which is synchronous, meaning that the code is blocking. This can cause your bot to freeze if the blocking code takes too long to execute.
To get around this, a separate thread can be created to handle the blocking code. Below is an example of this. Note how blocking_function uses time.sleep to block for 10 minutes (600 seconds). This should be more than enough to freeze and eventually crash the bot. However, since the function runs in its own thread via run_in_executor, the bot continues to operate as normal.
New versions
import time
import asyncio
from discord.ext import commands
from concurrent.futures import ThreadPoolExecutor

def blocking_function():
    print('entering blocking function')
    time.sleep(600)
    print('sleep has been completed')
    return 'Pong'

client = commands.Bot(command_prefix='!')

@client.event
async def on_ready():
    print('client ready')

@client.command()
async def ping(ctx):
    loop = asyncio.get_event_loop()
    block_return = await loop.run_in_executor(ThreadPoolExecutor(), blocking_function)
    await ctx.send(block_return)

client.run('token')
Older async version
import time
import asyncio
from discord.ext import commands
from concurrent.futures import ThreadPoolExecutor

def blocking_function():
    print('entering blocking function')
    time.sleep(600)
    print('sleep has been completed')
    return 'Pong'

client = commands.Bot(command_prefix='!')

@client.event
async def on_ready():
    print('client ready')

@client.command()
async def ping():
    loop = asyncio.get_event_loop()
    block_return = await loop.run_in_executor(ThreadPoolExecutor(), blocking_function)
    await client.say(block_return)

client.run('token')

Asynchronously writing to console from stdin and other sources

I am trying to write some kind of renderer for the command line that should be able to print data from stdin and from another data source, using asyncio and blessed, which is an improved version of python-blessings.
Here is what I have so far:
import asyncio
from blessed import Terminal

@asyncio.coroutine
def render(term):
    while True:
        received = yield
        if received:
            print(term.bold + received + term.normal)

async def ping(renderer):
    while True:
        renderer.send('ping')
        await asyncio.sleep(1)

async def input_reader(term, renderer):
    while True:
        with term.cbreak():
            val = term.inkey()
            if val.is_sequence:
                renderer.send("got sequence: {0}.".format((str(val), val.name, val.code)))
            elif val:
                renderer.send("got {0}.".format(val))

async def client():
    term = Terminal()
    renderer = render(term)
    render_task = asyncio.ensure_future(renderer)
    pinger = asyncio.ensure_future(ping(renderer))
    inputter = asyncio.ensure_future(input_reader(term, renderer))
    done, pending = await asyncio.wait(
        [pinger, inputter, renderer],
        return_when=asyncio.FIRST_COMPLETED,
    )
    for task in pending:
        task.cancel()

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(client())
    asyncio.get_event_loop().run_forever()
For learning and testing purposes there is just a dumb ping that sends 'ping' each second, and another routine that grabs key input and also sends it to my renderer.
But with this code, ping only appears once in the command line, while input_reader works as expected. When I replace input_reader with a pong similar to ping, everything is fine.
This is how it looks when typing 'pong', even if it takes ten seconds to type 'pong':
$ python async_term.py
ping
got p.
got o.
got n.
got g.
It seems like blessed is not built to work correctly with asyncio: inkey() is a blocking method. This will block any other coroutine.
You can use a loop with kbhit() and await asyncio.sleep() to yield control to other coroutines - but this is not a clean asyncio solution.
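Sketched generically so it runs without a real terminal (poll and read stand in for blessed's term.kbhit(0) and term.inkey(timeout=0); a canned key sequence replaces actual keyboard input):

```python
import asyncio

async def poll_input(poll, read, handle, interval=0.05):
    """Non-blocking input loop: check, read, then yield to the event loop."""
    while True:
        if poll():              # e.g. term.kbhit(0) -- returns immediately
            key = read()        # e.g. term.inkey(timeout=0)
            if key is None:     # sentinel to end this demo loop
                return
            handle(key)
        await asyncio.sleep(interval)  # lets ping() and friends run

# Demo with a canned key sequence instead of a real terminal:
keys = iter(list("pong") + [None])
got = []
asyncio.run(poll_input(lambda: True, lambda: next(keys), got.append, interval=0))
print(got)  # ['p', 'o', 'n', 'g']
```

The await asyncio.sleep(interval) is what fixes the original symptom: it hands control back to the event loop on every iteration, so the once-a-second ping keeps printing between keystrokes.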
