Why is this queue not working properly? - python-3.x

The following queue is not working properly somehow. Is there any obvious mistake I have made? Basically, every incoming SMS message is put onto the queue; the pump tries to send it, and if it is successful, deletes it from the queue. If it's unsuccessful, it sleeps for 2 seconds and tries sending it again.
# initialize queue
queue = queue.Queue()

def messagePump():
    while True:
        item = queue.get()
        if item is not None:
            status = sendText(item)
            if status == 'SUCCEEDED':
                queue.task_done()
            else:
                time.sleep(2)

def sendText(item):
    response = getClient().send_message(item)
    response = response['messages'][0]
    if response['status'] == '0':
        return 'SUCCEEDED'
    else:
        return 'FAILED'
@app.route('/webhooks/inbound-sms', methods=['POST'])
def delivery_receipt():
    data = dict(request.form) or dict(request.args)
    senderNumber = data['msisdn'][0]
    incomingMessage = data['text'][0]
    # came from customer service operator
    if (senderNumber == customerServiceNumber):
        try:
            split = incomingMessage.split(';')
            # get recipient phone number
            recipient = split[0]
            # get message content
            message = split[1]
            # check if target number is 10 digits long and there is a message
            if (len(message) > 0):
                # for confirmation send beginning string only
                successText = 'Message successfully sent to: '+recipient+' with text: '+message[:7]
                queue.put({'from': virtualNumber, 'to': recipient, 'text': message})
The above is running on a Flask server. messagePump is invoked like so:
thread = threading.Thread(target=messagePump)
thread.start()

What commonly happens in such cases is that the thread finishes execution before items start appearing in the queue; set thread.daemon = True before running thread.start().
Another thing that may happen here is that the thread was terminated due to an exception. Make sure messagePump handles all possible exceptions.
This topic on tracing exceptions in threads may be useful to you:
Catch a thread's exception in the caller thread in Python
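Putting both suggestions together, a minimal sketch (assuming the intended behaviour is to re-queue a failed message for retry, which the original loop never does; queue and sendText are the objects from the question):
import threading
import time
import traceback

def messagePump():
    while True:
        item = queue.get()  # blocks until an item is available
        try:
            if sendText(item) == 'SUCCEEDED':
                queue.task_done()
            else:
                # re-queue the failed item so it is actually retried
                time.sleep(2)
                queue.put(item)
                queue.task_done()
        except Exception:
            # log instead of letting the worker thread die silently
            traceback.print_exc()
            queue.task_done()

thread = threading.Thread(target=messagePump, daemon=True)
thread.start()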

Related

QLDB Python Driver Error Handling with Lambda and SQS

We have a QLDB ingestion process that consists of a Lambda function triggered by SQS.
We want to make sure our pipeline is airtight, so if a failure or error occurs during driver execution, we don't lose data that fails to commit to QLDB.
In our testing we noticed that if there's a failure within the Lambda itself, it automatically resends the message to the queue, but if the driver fails, the data is lost.
I understand that the default behavior for the driver is to retry four times after the initial failure. My question is, if I wrap qldb_driver.execute_lambda() in a try statement, will that allow the driver to retry upon failure or will it instantly return as a failure and be handled by the except statement?
Here is how I've written the first half of the function:
import json
import boto3
import datetime
from pyqldb.driver.qldb_driver import QldbDriver
from utils import upsert, resend_to_sqs, delete_from_sqs

queue_url = 'https://sqs.XXX/'
sqs = boto3.client('sqs', region_name='us-east-1')
ledger = 'XXXXX'
table = 'XXXXX'
qldb_driver = QldbDriver(ledger_name = ledger, region_name='us-east-1')

def lambda_handler(event, context):
    # Simple iterable to identify messages
    i = 0
    # Error flag
    error = False
    # Empty list to store message send status as well as body or receipt_handle
    batch_messages = []
    for record in event['Records']:
        payload = json.loads(record["body"])
        payload['update_ts'] = str(datetime.datetime.now())
        try:
            qldb_driver.execute_lambda(lambda executor: upsert(executor, ledger = ledger, table_name = table, data = payload))
            # If the message sends successfully, give it status 200 and add the receipt_handle to our list
            # so in case an error occurs later, we can delete this message from the queue.
            message_info = {f'message_{i}': 200, 'receiptHandle': record['receiptHandle']}
            batch_messages.append(message_info)
        except Exception as e:
            print(e)
            # Flip error flag to True
            error = True
            # If the commit fails, set status 400 and add the message's body to our list.
            # This will allow us to send the message back to the queue during error handling.
            message_info = {f'message_{i}': 400, 'body': record['body']}
            batch_messages.append(message_info)
        i += 1
Assuming that this try/except allows the driver to retry upon failure, I've written an additional process to record message data from our batch to delete successful commits and send failures back to the queue:
# Begin error handling
if error:
    count = 0
    for j in range(len(batch_messages)):
        # If a message was sent successfully, delete it from the queue
        if batch_messages[j][f'message_{j}'] == 200:
            receipt_handle = batch_messages[j]['receiptHandle']
            delete_from_sqs(sqs, queue_url, receipt_handle)
        # If the message failed to commit to QLDB, send it back to the queue
        else:
            body = batch_messages[j]['body']
            resend_to_sqs(sqs, queue_url, body)
            count += 1
    print(f"ERROR(S) DETECTED - {count} MESSAGES RETURNED TO QUEUE")
else:
    print("BATCH PROCESSING SUCCESSFUL")
Thank you for your insight!
The QLDB Python driver can be configured for fewer or more retries if you need. I'm not sure if you wanted it to try only once, or if you were asking whether the driver will try the transaction 4 times before triggering the try/except. The driver will still try up to 4 times before the exception reaches your except block.
You can follow the example here to modify the retry amount. Also note that the default retry timeout is a random jitter in milliseconds, not exponential backoff. With QLDB, you shouldn't need to wait long periods to retry, since it uses optimistic concurrency control.
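As a sketch based on the driver's documented RetryConfig (the retry_limit value here is illustrative; ledger is the variable from your code):
from pyqldb.config.retry_config import RetryConfig
from pyqldb.driver.qldb_driver import QldbDriver

# illustrative: lower the retry limit from the default of 4 to 2
retry_config = RetryConfig(retry_limit=2)
qldb_driver = QldbDriver(ledger_name=ledger, region_name='us-east-1',
                         retry_config=retry_config)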
Also, with your design of throwing the failed message back into the queue, you might want to consider a dead-letter queue instead. Dead-letter queues prevent troublesome messages from retrying indefinitely, unless that's your goal.
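For example, a dead-letter queue is attached to the source queue through its redrive policy; a sketch reusing sqs and queue_url from your code, with a placeholder ARN and an illustrative maxReceiveCount:
import json

# placeholder ARN for an existing dead-letter queue
dlq_arn = 'arn:aws:sqs:us-east-1:123456789012:my-dlq'

sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={
        'RedrivePolicy': json.dumps({
            'deadLetterTargetArn': dlq_arn,
            'maxReceiveCount': '5',  # after 5 failed receives, SQS moves the message to the DLQ
        })
    }
)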
(edit/additionally)
Note that the QLDB driver exhausts its retries before raising an exception.

Input blocks the subscriber/client from receiving messages in a Pub/Sub system

I am trying to build a publisher/subscriber system. In simple words, a publisher can publish messages to all subscribers via a broker. Subscribers support 2 functionalities: 1) receive a message from the broker, and 2) send a message to the broker with the input method. But there is a problem: subscribers block while waiting for input from stdin, and they only receive the publisher's messages after input, even when no input is needed.
I would like to solve the problem in this direction: while waiting for input from stdin, a subscriber can still receive a message from a publisher. I tried curses, but I failed.
Here is part of my code:
subscriber.py
while True:
    try:
        command = input("Enter a command: ")
        # command = sys.stdin.readline()
        if command == "quit":
            break
        command.strip()
        command_to_broker = process_command(command, sub_id)
        if command_to_broker == error_message:
            print("Command with wrong format!")
            continue
        sock.sendall(bytes(command_to_broker, ENCODING))
        received = str(sock.recv(BUFFER_SIZE), ENCODING)
        print("Received from BROKER: " + received)
    except:
        print("Error/Disconnect")
broker.py (starts a publisher thread)
def publisher_thread(connection, topics_and_subscribers, subscribers_and_ports, subscribers):
    while True:
        data = connection.recv(BUFFER_SIZE)
        print("Command from PUBLISHER {}".format(data.decode(ENCODING)))
        response = 'OK'
        command = data.decode(ENCODING).split(" ", 3)
        pub_id = command[0]
        topic = command[2]
        message = command[3]
        if topic in topics_and_subscribers:
            for s in subscribers:
                print(s.getpeername())
                sum = s.send(bytes(message, ENCODING))
                print(sum)
        if not data:
            break
        connection.sendall(bytes(response, ENCODING))
    connection.close()
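One common way out (a sketch, not from the original post) is to multiplex stdin and the broker socket with the standard selectors module, so the subscriber only reads whichever source is ready. Note that selecting on sys.stdin works on Unix-like systems but not on Windows; sock, process_command, sub_id, error_message, ENCODING and BUFFER_SIZE are the names from the snippet above:
import selectors
import sys

sel = selectors.DefaultSelector()
sel.register(sock, selectors.EVENT_READ, data='socket')
sel.register(sys.stdin, selectors.EVENT_READ, data='stdin')

running = True
while running:
    for key, _ in sel.select():
        if key.data == 'stdin':
            command = sys.stdin.readline().strip()
            if command == 'quit':
                running = False
                break
            command_to_broker = process_command(command, sub_id)
            if command_to_broker == error_message:
                print('Command with wrong format!')
                continue
            sock.sendall(bytes(command_to_broker, ENCODING))
        else:
            # a broker message arrived while we were idle at the prompt
            received = str(sock.recv(BUFFER_SIZE), ENCODING)
            print('Received from BROKER: ' + received)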

How can I use Python's asyncio queues to simulate threads?

I'm trying to simulate processing in threads by using asyncio.Queue. However, I'm struggling to turn the threaded processing simulation into an asynchronous loop.
So what my script does in brief: 1) receive processing requests over a websocket, 2) assign the request to the requested queue (which simulates a thread), 3) runs processing queues, which put responses into one shared response queue, and then 4) the websocket takes out the responses from the shared queue one by one and sends them out to the server.
Simplified version of my code:
# Initialize empty processing queues for the number of threads I want to simulate
processing_queues = [asyncio.Queue() for i in range(n_queues)]

# Initialize shared response queue
response_q = asyncio.Queue()

# Set up a websocket context manager
async with websockets.connect(f"ws://{host}:{port}") as websocket:
    while True:
        # Read incoming requests
        message = await websocket.recv()
        # Parse message -> get request data and on which thread / queue to process it
        request_data, queue_no = parse_message(message)
        # Put the request data to the requested queue (imitating a thread)
        await processing_queues[queue_no].put(request_data)

        # THIS IS WHERE I THINK ASYNCHRONY BREAKS (AND I NEED HELP)
        # Do processing in each imitated processing thread
        for proc_q in processing_queues:
            if not proc_q.empty():
                request_data = await proc_q.get()
                # do the processing
                response = process_data(request_data)
                # Add the response to the response queue
                await response_q.put(response)

        # Send responses back to the server
        if not response_q.empty():
            response_data = await response_q.get()
            await websocket.send(response_data)
From the output of the script, I deduced that 1) I seem to receive requests and send out responses asynchronously; 2) processing in queues does not happen asynchronously. Correct me if I'm wrong.
I was reading about create_task() in asyncio. Maybe that could be a way to solve my problem?
I'm open to any solution (even hacky).
P.S. I would just use threads from threading library, but I need asyncio for websockets library.
P.P.S. A threaded version of my idea:
class ProcessingImitationThread(threading.Thread):
    def __init__(self, thread_id, request_q, response_q):
        threading.Thread.__init__(self)
        self.thread_id = thread_id
        self.request_q = request_q
        self.response_q = response_q

    def run(self):
        while True:
            try:
                (x, request_id) = self.request_q.get()
            except Empty:
                time.sleep(0.2)
            else:
                if x == -1:
                    # EXIT CONDITION
                    break
                else:
                    sleep_time_for_x = count_imitation(x, state)
                    time.sleep(sleep_time_for_x)
                    self.response_q.put(request_id)
                    print(f"request {request_id} executed")

# Set up
processing_qs = [queue.Queue() for i in range(n_processes_simulated)]
response_q = queue.Queue()
processing_thread_handlers = []
for i in range(n_processes_simulated):
    # create thread
    t = ProcessingImitationThread(i, processing_qs[i], response_q)
    processing_thread_handlers.append(t)

# Main loop
while True:
    # receive requests and assign to requested queue (so that the thread picks them up)
    if new_request:
        requested_process, x, request_id = parse(new_request)
        processing_qs[requested_process].put((x, request_id))
    ...
    # if there are any new responses, send them out to the server
    if response_q.qsize() > 0:
        request_id = response_q.get()
        # Networking: send to server
        ...

# Close down
...
EDIT: fixes small typos.
Your intuition that you need create_task is correct, as create_task is the closest async equivalent of Thread.start: it creates a task that runs in parallel (in an async sense) to whatever you are doing now.
You need separate coroutines that drain the respective queues running in parallel; something like this:
async def main():
    processing_qs = [asyncio.Queue() for i in range(n_queues)]
    response_q = asyncio.Queue()
    async with websockets.connect(f"ws://{host}:{port}") as websocket:
        processing_tasks = [
            asyncio.create_task(processing(processing_q, response_q))
            for processing_q in processing_qs
        ]
        response_task = asyncio.create_task(
            send_responses(websocket, response_q))
        while True:
            message = await websocket.recv()
            requested_process, x, request_id = parse(message)
            await processing_qs[requested_process].put((x, request_id))

async def processing(processing_q, response_q):
    while True:
        x, request_id = await processing_q.get()
        ... create response ...
        await response_q.put(response)

async def send_responses(websocket, response_q):
    while True:
        msg = await response_q.get()
        await websocket.send(msg)

How can you reset or cancel asyncio.sleep events?

I am trying to set up a bot in Discord that works on timers. One of them works when someone types the '!challenge' command, in which case the bot waits 60 seconds to see if anyone types the '!accept' command in response. If no one does, it states 'Challenge was ignored. Resetting.' or something along those lines.
Another timer runs during the game itself and is an hour long. However, the hour resets whenever a command is put in. If the game is idle for an hour (one player DCs or quits), the bot resets the game itself.
I had this working with threading:
# Initiate a challenge to the room. Opponent is whoever uses the !accept command. This command should be
# unavailable for use the moment someone !accepts, to ensure no one trolls during a fight.
if message[:10] == "!challenge":
    msg, opponent, pOneInfo, new_game, bTimer, playerOne = message_10_challenge(channel, charFolder, message, unspoiledArena, character, self.game)
    if opponent is not "":
        self.opponent = opponent
    if pOneInfo is not None:
        self.pOneInfo = pOneInfo
        self.pOneTotalHP = self.pOneInfo['thp']
        self.pOneCurrentHP = self.pOneInfo['thp']
        self.pOneLevel = self.pOneInfo['level']
    if new_game != 0:
        self.game = new_game
    if bTimer is True:
        timeout = 60
        self.timer = Timer(timeout, self.challengeTimeOut)
        self.timer.start()
    if playerOne is not "":
        self.playerOne = playerOne
    super().MSG(channel, msg)
and:
# Response to use to accept a challenge.
if message == "!accept":
    msg, pTwoInfo, new_game, playerTwo, bTimer, bGameTimer, new_oppenent, token = message_accept(channel, charFolder, unspoiledArena, character, self.game, self.opponent, self.pOneInfo)
    if not charFile.is_file():
        super().MSG(channel, "You don't even have a character made to fight.")
    else:
        if new_game is not None:
            self.game = new_game
        if pTwoInfo is not None:
            self.pTwoInfo = pTwoInfo
            self.pTwoTotalHP = self.pTwoInfo['thp']
            self.pTwoCurrentHP = self.pTwoInfo['thp']
            self.pTwoLevel = self.pTwoInfo['level']
        if bTimer:
            self.timer.cancel()
        if bGameTimer:
            gametimeout = 3600
            self.gameTimer = Timer(gametimeout, self.combatTimeOut)
            self.gameTimer.start()
        if new_oppenent is not None:
            self.opponent = new_oppenent
        if playerTwo is not None:
            self.playerTwo = playerTwo
        if token is not None:
            self.token = token
        for msg_item in msg:
            super().MSG(channel, msg_item)
with functions:
def challengeTimeOut(self):
    super().MSG(unspoiledArena, "Challenge was not accepted. Challenge reset.")
    self.game = 0

def combatTimeOut(self):
    super().PRI('Unspoiled Desire', "!reset")
The above is an example of the same game on a different chat platform, with threading handling the timers. But threading and discord.py aren't friends, I guess. So I am trying to get the above code to work with discord.py, which uses asyncio.
The thought was to use asyncio.sleep() in the bTimer branch:
if bTimer is True:
    await asyncio.sleep(60)
    self.game = 0
    await ctx.send("Challenge was not accepted. Challenge reset.")
And this works... but it doesn't stop the timer. So even if someone !accepts, thus changing bTimer to False, which should cancel the timer:
if bTimer:
    self.timer.cancel()
it's still going to say "Challenge was not accepted. Challenge reset."
The same problem will occur with bGameTimer if I try:
if bGameTimer:
    await asyncio.sleep(3600)
    await ctx.send("!reset")
the game will be hardwired to reset after an hour whether the game is done or not, rather than resetting the one-hour timer after every turn and only resetting the game if a full hour passes in which no commands are made.
Is there a way to easily cancel or reset sleep cycles?
The asyncio code equivalent to your threading code would be:
...
if bTimer:
    self.timer = asyncio.create_task(self.challengeTimeout())
...

async def challengeTimeout(self):
    await asyncio.sleep(60)
    self.game = 0
    await ctx.send("Challenge was not accepted. Challenge reset.")
asyncio.create_task() creates a light-weight "task" object roughly equivalent to a thread. It runs "in parallel" with your other coroutines, and you can cancel it by invoking its cancel() method:
if bTimer:
    self.timer.cancel()
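The same pattern covers the hour-long game timer: to "reset" it after each command, cancel the running task and create a fresh one. A sketch under the same assumptions as the code above (ctx and self.gameTimer come from the question's surrounding code):
def resetGameTimer(self):
    # cancel any running countdown, then start a fresh one
    if getattr(self, 'gameTimer', None) is not None:
        self.gameTimer.cancel()
    self.gameTimer = asyncio.create_task(self.combatTimeout())

async def combatTimeout(self):
    await asyncio.sleep(3600)
    await ctx.send("!reset")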

ThreadPoolWorkers that will not die if they create a thread

In my Slack RTM message handler function, I call any one of a set of functions. Each message event handler blocks the next, so I began to call these functions in new threads. After doing this, I cannot exit() my program; I am left with a ThreadPoolExecutor-x_x for each of the message handlers that were called.
Even if I setDaemon(True) on my threads and .join() them, the ThreadPoolExecutors remain.
def exitFunc(sendfn, channel, thread, user, text, groups, groupdict, meta):
    reply = 'bye'
    sendfn(channel=channel, message = reply)
    for thread in threading.enumerate():
        print(thread.getName())
    exit()
Produces this, and hangs:
MainThread
ThreadPoolExecutor-0_0
ThreadPoolExecutor-0_1
ThreadPoolExecutor-3_0
ThreadPoolExecutor-3_1
ThreadPoolExecutor-3_2
exitFunc
ThreadPoolExecutor-3_3
When I don't run the function as a new thread, these ThreadPoolExecutors seem to hang around but they do let my program exit.
Spawning the threads:
def __init__(self, token, username = None, icon_emoji = None, security = None):
    ...
    slack.RTMClient.run_on(event='message')(self.readMessage)

def readMessage(self, **payload):
    ...
    thread = Thread(target=fn['fn'], kwargs = fnargs)
    thread.start()
The function passed in as sendfn:
def sendMessage(self, channel, message, thread = None, username = None, icon_emoji = None):
    if username == None:
        username = self.default_username
    if icon_emoji == None:
        icon_emoji = self.default_icon_emoji
    print('{} Send to {}: {}'.format(str(datetime.now()), channel, message))
    self.webclient.chat_postMessage(
        channel=channel,
        text=message,
        username=username,
        icon_emoji=icon_emoji,
        thread_ts=thread
    )
I am using slackclient 2.0.1 python package.
This was an XY problem.
It wasn't that my threads weren't dying, it was that I was trying to exit() from a thread.
Since exit() ultimately "only" raises an exception, it will only exit the process when called from the main thread, and the exception is not intercepted.
https://docs.python.org/2/library/sys.html#sys.exit
I moved my exit to the main thread and it exited fine.
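A minimal sketch of that arrangement, with an illustrative shutdown_event shared between the handler and the main thread (the handler signature mirrors the one above):
import sys
import threading

shutdown_event = threading.Event()

def exitFunc(sendfn, channel, thread, user, text, groups, groupdict, meta):
    sendfn(channel=channel, message='bye')
    # signal the main thread instead of calling exit() from a worker
    shutdown_event.set()

# in the main thread, after starting the RTM client and handlers:
shutdown_event.wait()  # blocks until a handler signals shutdown
sys.exit(0)            # SystemExit raised in the main thread actually ends the process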