django and telegram bot - python-3.x

from telegram import InlineKeyboardButton, InlineKeyboardMarkup, Update
from telegram.ext import Updater, CommandHandler, CallbackQueryHandler, CallbackContext
import secrets


def start(update: Update, context: CallbackContext) -> None:
    chat_id = update.effective_chat.id
    context.bot.send_message(chat_id=chat_id,
                             text="Thank you for using our telegram bot! We will send you notifications here!")


def main():
    updater = Updater('53049746:27b1xn8KRQdCdFERPVw7o')
    updater.dispatcher.add_handler(CommandHandler('start', start))
    # Start the Bot
    updater.start_polling()
    # timeout=300
    # Run the bot until the user presses Ctrl-C or the process receives SIGINT,
    # SIGTERM or SIGABRT
    updater.idle()


main()
This is the code for my Telegram bot. I run it with python3 bot.py and it works.
The question is: I have a Django project, and I need to run this bot.py in the background. What is the best way to do it? (Right now I start my Django project with python3 manage.py runserver; later I will use Docker for it.)
UPDATE:
I need bot.py to respond to commands like /start, /info, /help, etc.
And I need the Django app to expose URLs like mywebsite.com/send_telegram_msg?user_id=123123123123 which will trigger my bot to send a message.
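One way to wire this up (a sketch, not from the original post): keep bot.py running as its own process for the /start, /info, /help handlers, and let the Django view call the Bot API directly, since sending a message does not need the polling process. The view name, URL, and TELEGRAM_BOT_TOKEN setting below are illustrative, and the snippet assumes python-telegram-bot v13.x to match the Updater-based code above.

# views.py -- hypothetical sketch of the /send_telegram_msg?user_id=... endpoint
from django.conf import settings
from django.http import HttpResponse, HttpResponseBadRequest
from telegram import Bot


def send_telegram_msg(request):
    user_id = request.GET.get('user_id')
    if not user_id:
        return HttpResponseBadRequest("user_id is required")
    # Plain Bot API call; no connection to the long-running bot.py process is needed.
    Bot(token=settings.TELEGRAM_BOT_TOKEN).send_message(
        chat_id=user_id,
        text="Hello from the Django app!",
    )
    return HttpResponse("sent")

bot.py itself can then be started as a separate process next to python3 manage.py runserver, for example as a second Docker service or a systemd unit.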

Related

ImportError: cannot import name 'AsyncWebhookAdapter' from 'discord' [duplicate]

I tried to import RequestsWebhookAdapter for my Python project and it just won't import.
discord.py version (pip): 1.7.3
Python version: 3.10.6
Code:
import requests
import discord
from discord import Webhook, RequestsWebhookAdapter  # Importing discord.RequestsWebhookAdapter doesn't work
webhook = Webhook.from_url('https://discord.com/api/webhooks/[my-webhook]', adapter=RequestsWebhookAdapter()) # Initializing webhook
webhook.send(content="Hello World") # Executing webhook.
How do I get the requests webhook adapter? I don't want to use async stuff.
With the new release of the discord.py package, you have to update your code to:
import requests
from discord import SyncWebhook # Import SyncWebhook
webhook = SyncWebhook.from_url('https://discord.com/api/webhooks/[my-webhook]') # Initializing webhook
webhook.send(content="Hello World") # Executing webhook.
There was a new release of the discord.py package today. Downgrade it to 1.7.3 and everything will be fine :)
So, your requirements.txt should contain:
discord==1.7.3
discord.py==1.7.3

Unable to run a Flask app if the starter file is other than app.py - Test-Driven Development with Python, Flask, and Docker [duplicate]

I want to know the correct way to start a flask application. The docs show two different commands:
$ flask -a sample run
and
$ python3.4 sample.py
Both produce the same result and run the application correctly.
What is the difference between the two and which should be used to run a Flask application?
The flask command is a CLI for interacting with Flask apps. The docs describe how to use CLI commands and add custom commands. The flask run command is the preferred way to start the development server.
Never use this command to deploy publicly, use a production WSGI server such as Gunicorn, uWSGI, Waitress, or mod_wsgi.
As of Flask 2.2, use the --app option to point the command at your app. It can point to an import name or file name. It will automatically detect an app instance or an app factory called create_app. Use the --debug option to run in debug mode with the debugger and reloader.
$ flask --app sample --debug run
Prior to Flask 2.2, the FLASK_APP and FLASK_ENV=development environment variables were used instead. FLASK_APP and FLASK_DEBUG=1 can still be used in place of the CLI options above.
$ export FLASK_APP=sample
$ export FLASK_ENV=development
$ flask run
On Windows CMD, use set instead of export.
> set FLASK_APP=sample
For PowerShell, use $env:.
> $env:FLASK_APP = "sample"
The python sample.py command runs a Python file and sets __name__ == "__main__". If the main block calls app.run(), it will run the development server. If you use an app factory, you could also instantiate an app instance at this point.
if __name__ == "__main__":
    app = create_app()
    app.run(debug=True)
Both of these commands ultimately start the Werkzeug development server, which, as the name implies, is a simple HTTP server that should only be used during development. Prefer the flask run command over calling app.run().
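For completeness, here is a minimal sample.py sketch (not from the answer above) that works with both invocations: flask --app sample run detects the create_app factory automatically, while python sample.py falls through to the __main__ block.

# sample.py -- minimal sketch combining the app factory and the __main__ block
from flask import Flask


def create_app():
    # picked up automatically by "flask --app sample run"
    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello from sample.py"

    return app


if __name__ == "__main__":
    # "python sample.py" starts the development server directly
    create_app().run(debug=True)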
The latest documentation has the following example, assuming you want to run hello.py (using the .py file extension is optional):
Unix, Linux, macOS, etc.:
$ export FLASK_APP=hello
$ flask run
Windows:
> set FLASK_APP=hello
> flask run
You just need to run this command:
python app.py
(app.py is your desired Flask file)
but make sure your .py file has the following Flask settings (related to port and host):
from flask import Flask, request
from flask_restful import Resource, Api
import sys
import os

app = Flask(__name__)
api = Api(app)

port = 5100
if len(sys.argv) > 1:
    port = int(sys.argv[1])
print("Api running on port : {} ".format(port))


class topic_tags(Resource):
    def get(self):
        return {'hello': 'world world'}


api.add_resource(topic_tags, '/')

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=port)
The simplest automatic way, without exporting anything, is using python app.py; see the example here:
from flask import (
    Flask,
    jsonify
)


# Function that creates the app
def create_app(test_config=None):
    # create and configure the app
    app = Flask(__name__)

    # Simple route
    @app.route('/')
    def hello_world():
        return jsonify({
            "status": "success",
            "message": "Hello World!"
        })

    return app  # do not forget to return the app


APP = create_app()

if __name__ == '__main__':
    # APP.run(host='0.0.0.0', port=5000, debug=True)
    APP.run(debug=True)
For Linux/Unix/macOS:
export FLASK_APP=sample.py
flask run
For Windows:
python sample.py
OR
set FLASK_APP=sample.py
flask run
You can also run a Flask application this way while being explicit about activating debug mode:
FLASK_APP=app.py FLASK_DEBUG=true flask run

Why does systemd not send unit properties-changed notifications on the session bus?

I implemented a D-Bus systemd listener in Python (3.7) that should monitor property changes of a systemd unit. On the session bus it does not receive any notifications. Running on the system bus, the code does what is expected.
Is there a way to also receive unit-changed notifications on the session bus?
My system: a Raspberry Pi 4 running the latest version of Raspberry Pi OS.
This is the service I created.
[Unit]
Description = A dummy service
[Service]
Type = simple
ExecStart = /bin/true
RemainAfterExit=yes
I installed the service to /etc/systemd/system and ~/.config/systemd/user and executed daemon-reload for the system and the user session. After doing so, the service is known both as a user service and as a system service.
This is the dummy_listener.py code
#!/usr/bin/env python3
# Python version required: >= 3.7 (because of used asyncio API)
"""A simple subscriber/listener for systemd unit signals"""
import sys
import asyncio

from dbus_next.aio import MessageBus
from dbus_next import BusType


class DbusDummyService():  # pylint: disable=no-self-use
    """Asyncio based dummy.service listener"""

    async def init(self, bus_type=BusType.SESSION):
        """Register listener callback with dbus bus_type"""
        bus = await MessageBus(bus_type=bus_type).connect()
        # Get introspection XML
        introspection = await bus.introspect('org.freedesktop.systemd1',
                                             '/org/freedesktop/systemd1/unit/dummy_2eservice')
        # Select systemd service object
        obj = bus.get_proxy_object('org.freedesktop.systemd1',
                                   '/org/freedesktop/systemd1/unit/dummy_2eservice',
                                   introspection)
        # Get required interfaces
        properties_if = obj.get_interface('org.freedesktop.DBus.Properties')
        # Monitor service status changes
        properties_if.on_properties_changed(self.on_properties_changed_cb)

    def on_properties_changed_cb(self, interface_name, changed_props, invalidated_props):
        """Callback expected to be called on unit property changes"""
        print(f"Callback invoked for interface {interface_name}:")
        print("  Properties updated")
        for prop, val in changed_props.items():
            print(f"    {prop} set to {val.value}")
        print("  Properties invalidated")
        for prop in invalidated_props:
            print(f"    {prop} invalidated")


async def main(bus_type):
    """Asyncio main"""
    # Initialize dbus listener
    await DbusDummyService().init(bus_type)
    # Run loop forever (waiting for dbus signals)
    await asyncio.get_running_loop().create_future()


if __name__ == "__main__":
    try:
        BUS_TYPE = BusType.SYSTEM if 'sys' in sys.argv[1] else BusType.SESSION
    except BaseException:
        BUS_TYPE = BusType.SESSION
    asyncio.run(main(BUS_TYPE))
The listener is run like this on the system bus:
sudo python3 dummy_listener.py sys
For the session bus it is run with:
python3 dummy_listener.py
In a separate window I now restart the dummy service and expect the listener to print output.
For the session bus:
systemctl --user restart dummy
For the system bus:
sudo systemctl restart dummy
On the session bus the listener prints nothing. On the system bus, I receive a bunch of messages.
Any ideas?
systemd doesn't send PropertiesChanged signals unless at least one client is subscribed to it. You need to call the Subscribe() method from the org.freedesktop.systemd1.Manager interface on the /org/freedesktop/systemd1 object.
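With dbus_next, a small helper along these lines could be called from init() right after connecting the bus. This is a sketch: enable_systemd_signals is a made-up name, and it relies on dbus_next exposing D-Bus methods as call_<snake_case>() coroutines, so Subscribe() becomes call_subscribe().

async def enable_systemd_signals(bus):
    """Ask systemd to emit signals (such as PropertiesChanged) on this bus connection."""
    introspection = await bus.introspect('org.freedesktop.systemd1',
                                         '/org/freedesktop/systemd1')
    manager_obj = bus.get_proxy_object('org.freedesktop.systemd1',
                                       '/org/freedesktop/systemd1', introspection)
    manager_if = manager_obj.get_interface('org.freedesktop.systemd1.Manager')
    # systemd's Manager.Subscribe() enables signal emission for subscribed clients
    await manager_if.call_subscribe()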

Gunicorn python Klein

I use Klein for the server and want to run it via Gunicorn.
from klein import Klein

app = Klein()


@app.route('/')
def hello(request):
    return "Hello, world!"


resource = app.resource
It works fine with twistd -n web --class=twistdPlugin.resource, but I need to add threads. How do I do that?
Or how do I start this with Gunicorn? Right now it returns "Application object must be callable".
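One possible direction for the threads part (a sketch, not from the original thread): Klein runs on the Twisted reactor, so blocking work is typically handed to the reactor's thread pool with deferToThread rather than by adding Gunicorn worker threads; Gunicorn expects a WSGI callable, which app.resource is not. The blocking_work helper below is illustrative only.

from klein import Klein
from twisted.internet import reactor
from twisted.internet.threads import deferToThread

app = Klein()
reactor.suggestThreadPoolSize(20)  # grow the reactor's thread pool


def blocking_work():
    # stand-in for a slow, blocking call
    return "Hello, world!"


@app.route('/')
def hello(request):
    # run the blocking call in a reactor thread; Klein waits for the returned Deferred
    return deferToThread(blocking_work)


resource = app.resource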

Using Dask from script

Is it possible to run dask from a python script?
In interactive session I can just write
from dask.distributed import Client
client = Client()
as described in all tutorials. If, however, I write these lines in a script.py file and execute it with python script.py, it immediately crashes.
Another option I found is to use MPI:
# script.py
from dask_mpi import initialize
initialize()
from dask.distributed import Client
client = Client() # Connect this local process to remote workers
And then run the script with mpirun -n 4 python script.py. This doesn't crash; however, if you print the client
print(client)
# <Client: scheduler='tcp://137.250.37.84:35145' processes=0 cores=0>
you see that no cores are used, and accordingly the script runs forever without doing anything.
How do I set my scripts up correctly?
If you want to create processes from within a Python script, you need to protect that code in an if __name__ == "__main__": block:
from dask.distributed import Client

if __name__ == "__main__":
    client = Client()
If you want to use dask-mpi then you need to run it with mpirun or mpiexec with a suitable number of processes.
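As an illustration, a complete local-cluster script along those lines could look like this; the square helper and the task count are placeholders, not from the original question.

# script.py -- minimal sketch using the __main__ guard described above
from dask.distributed import Client


def square(x):
    return x * x


if __name__ == "__main__":
    client = Client()                        # starts a local scheduler and workers
    futures = client.map(square, range(10))  # submit work to the workers
    print(client.gather(futures))            # collect the results
    client.close()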
