Telethon doesn't send messages when running as service - python-3.x

I've created a script which sends messages using Telethon. The receivers are not always the same: the number of receivers and their IDs are taken from a MySQL table. The multiprocessing script runs fine in the expected loop when started from the command prompt, but as soon as it's started as a service the messages are not sent.
Please see the code below which includes the function to send out the messages. This function is called by another function which loops over the result of a MySQL query.
Can someone shed some light on why the function runs fine from the prompt but not as a service?
import configparser
from telethon import TelegramClient

# get configuration
config = configparser.ConfigParser()
config.read('/etc/p2000.cfg')
telegram_api_id = config.get('telegram', 'api_id')
telegram_api_hash = config.get('telegram', 'api_hash')
telegram_bot_name = config.get('telegram', 'bot_name')

client = TelegramClient(telegram_bot_name, telegram_api_id, telegram_api_hash)

def p2k_send_telegram(PeerID, Message):
    async def main():
        await client.send_message(int(PeerID), Message)
    with client:
        client.loop.run_until_complete(main())

Okay, the answer was easy and right in front of me! The issue could be isolated to the client variable: when running as a service under systemd, the session file has to be defined with its full path.
Something like this:
client = TelegramClient('/full/path/to/my.session', telegram_api_id, telegram_api_hash)
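If you prefer to keep the short session name, an alternative (a sketch based on standard systemd behaviour, not part of the original fix; the paths below are placeholders) is to pin the service's working directory in the unit file so Telethon resolves the relative session file from a known location:

[Service]
# Telethon resolves a relative session name against the working directory
WorkingDirectory=/opt/p2000
ExecStart=/usr/bin/python3 /opt/p2000/p2000.py
Restart=on-failure

Either way, the key point is that the service must not depend on whatever directory systemd happens to start it in.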

Related

How to keep running a client program in Python which uses Twilio

I have deployed a Flask application on an Ubuntu server. To keep a check on the Flask application I use Twilio: the client sends data to the server every 5 minutes, and if something goes wrong I should get a text message on my phone. Right now I am running this on my local machine, but how can I make it run all the time? Do I have to run the client code below on the Ubuntu server, or how could it be done?
import json
import time

import requests

def localClient():
    try:
        data = {"inputData": "Bank of America", "dataId": 12345}
        response = requests.post("http://12.345.567.890/inputData", json=data).json()
    except:
        # server unreachable: send an SMS alert via Twilio
        from twilio.rest import Client
        account_sid = "XXXXXXXXXXXXXXX"
        auth_token = "XXXXXXXXX"
        client = Client(account_sid, auth_token)
        message = client.messages \
            .create(
                body='Server is down',
                from_='+12345678901',
                to='+19876543210')

while True:
    localClient()
    time.sleep(300)
Use supervisor on Ubuntu. It will automatically restart your code whenever the server restarts, so you don't have to start it by hand every time; it keeps the program running until you stop it manually.
Refer to the following link to set up supervisor:
supervisor
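As a rough illustration (the program name, script path, and log paths below are placeholders, not taken from the question), a supervisor program definition for this script could look like this:

; e.g. /etc/supervisor/conf.d/twilio_client.conf
[program:twilio_client]
command=/usr/bin/python3 /home/ubuntu/localClient.py
autostart=true
autorestart=true
stdout_logfile=/var/log/twilio_client.out.log
stderr_logfile=/var/log/twilio_client.err.log

After saving the file, supervisorctl reread followed by supervisorctl update picks up the new program.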

RabbitMQ: keep requests after stopping the RabbitMQ process and queue

I built an app that connects to RabbitMQ. It works fine, but when I stop the RabbitMQ process all of my requests are lost. I want the requests to be saved even after the RabbitMQ service is killed, so that when the service restarts all of my requests return to their own places.
Here is my rabitmq.py:
import pika
import SimilarURLs

data = ''
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

def rabit_mq_start(Parameter):
    channel.queue_declare(queue='req')
    a = (take(datas=Parameter.decode()))
    channel.basic_publish(exchange='',
                          routing_key='req',
                          body=str(a))
    print(" [x] Sent {}".format(a))
    return a
    channel.start_consuming()

def take(datas):
    returns = SimilarURLs.start(data=datas)
    return returns
In addition, I'm sorry for writing mistakes in my question.
You need to enable publisher confirms (via the confirm_delivery method on your channel object). Then your application must keep track of what messages have been confirmed as published, and what messages have not. You will have to implement this yourself. When RabbitMQ is stopped and started again, your application can re-publish the messages that weren't confirmed.
It would be best to use the asynchronous publisher example as a guide. If you use BlockingConnection you won't get the async notifications when a message is confirmed, defeating their purpose.
If you need further assistance after trying to implement this yourself I suggest following up on the pika-python mailing list.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
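As a very rough sketch of the confirm-and-track idea on a BlockingConnection (assuming pika 1.x and the 'req' queue from the question; as noted above, the asynchronous publisher example is the better guide, since here basic_publish simply blocks until the broker answers):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='req')
channel.confirm_delivery()  # enable publisher confirms on this channel

unconfirmed = []  # messages to re-publish once the broker is back

def publish_with_confirm(body):
    try:
        # with confirms enabled, this call blocks until the broker acks or nacks
        channel.basic_publish(exchange='', routing_key='req', body=body, mandatory=True)
    except pika.exceptions.AMQPError:
        # nacked, unroutable, or connection lost: keep it for a later retry
        unconfirmed.append(body)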

Flask-SocketIO - How to emit an event from a sub-process

I have a Flask app which, upon a certain REST call, runs several modules using a ProcessPoolExecutor.
UPDATE: Added Redis as a message queue (using Docker, with redis as the Redis host).
socketio = SocketIO(app, message_queue='redis://redis')

(...)

def emit_event(evt, message):
    socketio.emit(evt, message, namespace='/test')

@app.route('/info', methods=['GET'])
def info():
    emit_event('update_reports', '')

(...)

if __name__ == "__main__":
    socketio.run(app, host='0.0.0.0', threaded=True)
Now that I added redis, it still works when emitting from the main app.
Here is some of the code where I'm running the sub-process:
def __init__(self):
    self.executor = futures.ProcessPoolExecutor(max_workers=4)
    self.socketio = SocketIO(async_mode='eventlet', message_queue='redis://redis')

(...)

future = self.executor.submit(process, params)
future.add_done_callback(functools.partial(self.finished_callback, pid))
Then in that callback I'm calling the emit_event method:
def finished_callback(self, pid, future):
    pid.status = Status.DONE.value
    pid.finished_at = datetime.datetime.utcnow()
    pid.save()
    self.socketio.emit('update_reports', 'done', namespace='/test')
Getting and sending/emitting messages from/to the client from my controller works just fine; if I call /info from curl or Postman my client also gets the message. But when I try to emit an event the same way from within this subprocess callback, it shows this error:
This is mostly for notifications, like notifying when a long process has finished and stuff like that.
INFO:socketio:emitting event "update_reports" to all [/test]
ERROR:socketio:Cannot publish to redis... retrying
ERROR:socketio:Cannot publish to redis... giving up
What am I doing wrong?
Thanks!
There are specific rules that you need to follow in setting up the Flask-SocketIO extension so that external processes can emit, which include the use of a message queue that the main and external processes use to coordinate efforts. See the Emitting from an External Process section of the documentation for instructions.
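As a minimal sketch of that pattern (assuming the same redis://redis queue; the variable name is illustrative), the process that runs outside the server creates its own SocketIO instance bound only to the message queue and emits through it, while the main app keeps its message_queue argument as shown above:

# in the worker / callback process: no Flask app object, just the message queue
from flask_socketio import SocketIO

external_sio = SocketIO(message_queue='redis://redis')
external_sio.emit('update_reports', 'done', namespace='/test')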

Thread holding up socket

My app receives jobs from a web server through sockets. At the moment, when a job is running on the app I can only send 2 more messages to the app before it stops receiving any more.
def handlemsg(self, data):
    self.sendmsg(cPickle.dumps('received'))  # notify the web server the message was received
    data = cPickle.loads(data)
    print data
    # Terminate a job
    if data[-1] == 'terminate':
        self.terminate(data[0])
    # Check if app is available
    elif data[-1] == 'prod':
        pass
    # Run a job
    else:
        supply = supply_thread(data, self.app)
        self.supplies[data['job_name']] = supply
        supply.daemon = True
        supply.start()
I can send as many prods as I like to the server. But once I send a job that activates a thread, responses become limited. For some reason it will allow me to send another two prods while the job is running, but after that the print message no longer appears; the job just keeps on working.
Any ideas? Thanks
I was running my data through a datagram socket configuration. I switched to a stream socket and that seemed to resolve it.
http://turing.cs.camosun.bc.ca/COMP173/notes/PySox.html
was helpful in the resolution.
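For reference (an illustration of the difference, not code from the original app), the two configurations come down to the socket type passed to socket.socket():

import socket

# datagram (UDP) socket: connectionless, messages can be dropped or arrive out of order
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# stream (TCP) socket: connection-oriented, delivery is reliable and ordered
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)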

bottle.py WSGI server stops responding

I'm trying to build a simple API with the bottle.py (Bottle v0.11.4) web framework. To 'daemonize' the app on my server (Ubuntu 10.04.4), I'm running the shell command
nohup python test.py &
where test.py is the following Python script:
import sys
import bottle
from bottle import route, run, request, response, abort, hook

@hook('after_request')
def enable_cors():
    response.headers['Access-Control-Allow-Origin'] = '*'

@route('/')
def ping():
    return 'Up and running!'

if __name__ == '__main__':
    run(host=<my_ip>, port=3000)
I'm running into the following issue:
This works initially, but the server stops responding after some time (~24 hours). Unfortunately, the logs don't contain any revealing error messages.
The only way I have been able to reproduce the issue is by running a second script on my Ubuntu server that creates another server listening on a different port (i.e. exactly the same script as above but with port=3001). If I send a request to the newly created server, I also do not get a response and the connection eventually times out.
Any suggestions are greatly appreciated. I'm new to this, so if there's something fundamentally wrong with this approach, any links to reference guides would also be appreciated. Thank you!
Can you make sure the server isn't sleeping?
If it is, try enabling Wake-on-LAN: http://ubuntuforums.org/showthread.php?t=234588
