Quart infinite/indefinite streaming response - python-3.x

I am trying to create a server (loosely) based on an old blog post to stream video with Quart.
To stream video to a client, it seems all I should need to do is have a route that returns a generator of frames. However, actually doing this results in a constantly repeated socket.send() raised exception message and a broken image on the client. After that, the server does not appear to respond to further requests.
Taking more inspiration from the original post, I tried returning a Response (using return Response(generator, mimetype="multipart/x-mixed-replace; boundary=frame")). This does actually display video on the client, but as soon as they disconnect (close the tab, navigate to another page, etc.) the server begins spamming socket.send() raised exception again and does not respond to further requests.
My code is below.
# in app.py
from camera_opencv import Camera
import os
from quart import (
    Quart,
    render_template,
    Response,
    send_from_directory,
)

app = Quart(__name__)

async def gen(c: Camera):
    for frame in c.frames():
        # d_frame = cv_processing.draw_debugs_jpegs(c.get_frame()[1])
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + frame[0] + b"\r\n")

c_gen = gen(Camera(0))

@app.route("/video_feed")
async def feed():
    """Streaming route (img src)"""
    # return c_gen
    return Response(c_gen, mimetype="multipart/x-mixed-replace; boundary=frame")
# in camera_opencv.py
from asyncio import Event

import cv2

class Camera:
    last_frame = []

    def __init__(self, source: int):
        self.video_source = source
        self.cv2_cam = cv2.VideoCapture(self.video_source)
        self.event = Event()

    def set_video_source(self, source):
        self.video_source = source
        self.cv2_cam = cv2.VideoCapture(self.video_source)

    async def get_frame(self):
        await self.event.wait()
        self.event.clear()
        return Camera.last_frame

    def frames(self):
        if not self.cv2_cam.isOpened():
            raise RuntimeError("Could not start camera.")
        while True:
            # read current frame
            _, img = self.cv2_cam.read()
            # encode as a jpeg image and return it
            Camera.last_frame = [cv2.imencode(".jpg", img)[1].tobytes(), img]
            self.event.set()
            yield Camera.last_frame
        self.cv2_cam.release()

This was originally an issue with Quart itself.
After a round of bugfixes to both Quart and Hypercorn, the code as posted functions as intended (as of 2018-11-13).
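For anyone still on older versions, an application-side workaround is to give each client its own generator and stop it when the connection drops. Below is a minimal sketch, assuming Quart cancels the handler's task on disconnect (which would surface as asyncio.CancelledError inside the generator); the route name is hypothetical, and camera/gen are the Camera class and generator from the code above:

import asyncio

camera = Camera(0)  # one shared camera, but a fresh generator per request

@app.route("/video_feed_safe")  # hypothetical route name
async def feed_safe():
    async def per_client():
        try:
            # same frame source as gen() above, but not shared between clients,
            # so a second viewer does not resume a half-consumed stream
            async for chunk in gen(camera):
                yield chunk
        except asyncio.CancelledError:
            # assumption: a client disconnect cancels the handler task,
            # letting us stop instead of writing to a dead socket
            raise
    return Response(per_client(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")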

Related

Python: Callback on the worker-queue not working

Apologies for the long post. I am trying to subscribe to a RabbitMQ queue and then create a worker-queue to execute tasks. This is required since the incoming rate on RabbitMQ would be high and processing each item from the queue takes 10-15 minutes, hence the need for a worker-queue. I am initiating only 4 items in the worker-queue and registering a callback method for processing them. The expectation is that when all 4 slots in the worker-queue are busy, new incoming items are blocked until a free slot is available.
The RabbitMQ piece is working well. The problem is that I cannot figure out why items from my worker-queue are not executing the task, i.e. the callback is not working. In fact, an item from the worker-queue gets executed only once, when program execution starts; after that, tasks keep getting added to the worker-queue without being consumed. I would appreciate help understanding this one.
I am attaching the code for rabbitmqConsumer, driver, and slaveConsumer. Some information has been redacted in the code for privacy reasons.
# This is the driver
#!/usr/bin/env python
import time

from rabbitmqConsumer import BasicMessageReceiver

basic_receiver_object = BasicMessageReceiver()
basic_receiver_object.declare_queue()
while True:
    basic_receiver_object.consume_message()
    time.sleep(2)
# This is the rabbitmqConsumer
#!/usr/bin/env python
import pika
import ssl
import json

from slaveConsumer import slave

class BasicMessageReceiver:
    def __init__(self):
        # SSL Context for TLS configuration of Amazon MQ for RabbitMQ
        ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
        url = <url for the queue>
        parameters = pika.URLParameters(url)
        parameters.ssl_options = pika.SSLOptions(context=ssl_context)
        self.connection = pika.BlockingConnection(parameters)
        self.channel = self.connection.channel()
        # worker-queue object
        self.slave_object = slave()
        self.slave_object.start_task()

    def declare_queue(self, queue_name="abc"):
        print(f"Trying to declare queue inside consumer({queue_name})...")
        self.channel.queue_declare(queue=queue_name, durable=True)

    def close(self):
        print("Closing Receiver")
        self.channel.close()
        self.connection.close()

    def _consume_message_setup(self, queue_name):
        def message_consume(ch, method, properties, body):
            print(f"I am inside the message_consume")
            message = json.loads(body)
            self.slave_object.execute_task(message)
            ch.basic_ack(delivery_tag=method.delivery_tag)
        self.channel.basic_qos(prefetch_count=1)
        self.channel.basic_consume(on_message_callback=message_consume,
                                   queue=queue_name)

    def consume_message(self, queue_name="abc"):
        print("I am starting the rabbitmq start_consuming")
        self._consume_message_setup(queue_name)
        self.channel.start_consuming()
# This is the slaveConsumer
#!/usr/bin/env python
import pika
import ssl
import json
import requests
import threading
import queue
import os

class slave:
    def __init__(self):
        self.job_queue = queue.Queue(maxsize=3)
        self.job_item = ""

    def start_task(self):
        def _worker():
            while True:
                json_body = self.job_queue.get()
                self._parse_object_from_queue(json_body)
                self.job_queue.task_done()
        threading.Thread(target=_worker, daemon=True).start()

    def execute_task(self, obj):
        print("Inside execute_task")
        self.job_item = obj
        self.job_queue.put(self.job_item)
        # print(self.job_queue.queue)

    def _parse_object_from_queue(self, json_body):
        if bool(json_body['entity']):
            if json_body['entity'] == 'Hello':
                print("Inside Slave: Hello")
            elif json_body['entity'] == 'World':
                print("Inside Slave: World")
            self.job_queue.join()
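One likely culprit, just from reading the code above: Queue.join() blocks until every queued item has been marked done via task_done(), but _parse_object_from_queue calls it from inside the worker thread, before the worker has called task_done() for the item it is currently holding, so the unfinished count can never reach zero and the worker hangs after the first message, which matches the "executed only once" symptom. A sketch of the method with that call removed (same logic otherwise):

def _parse_object_from_queue(self, json_body):
    if bool(json_body['entity']):
        if json_body['entity'] == 'Hello':
            print("Inside Slave: Hello")
        elif json_body['entity'] == 'World':
            print("Inside Slave: World")
    # no self.job_queue.join() here: the worker loop in start_task()
    # already calls task_done() after this method returns, and calling
    # join() from inside the worker deadlocks on the item being processed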

Python-asyncio and subprocess deployment on IIS: returning HTTP response without running another script completely

I'm facing an issue in creating real-time status updates for merging new datasets with an old one, and for machine learning model creation results, via a web framework. The task is simple, in the following steps:
1. A user/client sends a new dataset in a .CSV file to the server,
2. on the server side my Windows machine receives the file and sends back an acknowledgement,
3. the new dataset is merged with the old one for new machine learning model creation, and
4. another Python script is run (the one that creates a new sequential deep-learning model). Only after the successful completion of that other script should my code return the response to the client!
I have deployed my Python Flask application on IIS 10. To run the other Python script, this main Flask API script has to wait for the model creation script to complete. The model creation script contains several stages: loading datasets, tokenizing, one-hot encoding, padding, model training for 100 epochs, and finally prediction results.
My exact goal is that this Flask API has to wait until the entire process completes; I'm sure it takes 8-9 minutes to finish the whole script invoked via subprocess.run(). While testing this code in development mode it works excellently without any issues! But while testing it in production mode on IIS, it does not wait for the whole process and returns a response to the client within 6-7 seconds.
For debugging purposes I added logging to record all events in both the Flask script and the machine learning model creation script. From that I learned that the model creation script only ran about 10%! First I tried simple methods with async def and await around the subprocess.run(); it made no difference. Then I brought in threading with get_event_loop() and run_until_complete() to make my parent code wait until the whole process finished. But I'm still stuck; I couldn't find a working solution. Please let me know what I did wrong. Thank you.
Configurations:
Python 3.7.9
Windows server 2019 and
IIS 10.0 Express
My code:
import os
import time
import glob
import subprocess
import pandas as pd
from flask import Flask, request, jsonify
from werkzeug.utils import secure_filename
from datetime import datetime
import logging
import asyncio
from concurrent.futures import ThreadPoolExecutor
ALLOWED_EXTENSIONS = {'csv', 'xlsx'}
_executor = ThreadPoolExecutor(1)
app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = "C:\\inetpub\\wwwroot\\iAssist_IT_support\\New_IT_support_datasets"
currentDateTime = datetime.now()
filenames = None
logger = logging.getLogger(__name__)
app.logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s:%(name)s:%(message)s')
file_handler = logging.FileHandler('model-creation-status.log')
file_handler.setFormatter(formatter)
# stream_handler = logging.StreamHandler()
# stream_handler.setFormatter(formatter)
app.logger.addHandler(file_handler)
# app.logger.addHandler(stream_handler)
def allowed_file(filename):
return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS
#app.route('/file_upload')
def home():
return jsonify("Hello, This is a file-upload API, To send the file, use http://13.213.81.139/file_upload/send_file")
#app.route('/file_upload/status1', methods=['POST'])
def upload_file():
app.logger.debug("/file_upload/status1 is execution")
# check if the post request has the file part
if 'file' not in request.files:
app.logger.debug("No file part in the request")
response = jsonify({'message': 'No file part in the request'})
response.status_code = 400
return response
file = request.files['file']
if file.filename == '':
app.logger.debug("No file selected for uploading")
response = jsonify({'message': 'No file selected for uploading'})
response.status_code = 400
return response
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
print(filename)
print(file)
app.logger.debug("Spreadsheet received successfully")
response = jsonify({'message': 'Spreadsheet uploaded successfully'})
response.status_code = 201
return response
else:
app.logger.debug("Allowed file types are csv or xlsx")
response = jsonify({'message': 'Allowed file types are csv or xlsx'})
response.status_code = 400
return response
#app.route('/file_upload/status2', methods=['POST'])
def status1():
global filenames
app.logger.debug("file_upload/status2 route is executed")
if request.method == 'POST':
# Get data in json format
if request.get_json():
filenames = request.get_json()
app.logger.debug(filenames)
filenames = filenames['data']
# print(filenames)
folderpath = glob.glob('C:\\inetpub\\wwwroot\\iAssist_IT_support\\New_IT_support_datasets\\*.csv')
latest_file = max(folderpath, key=os.path.getctime)
# print(latest_file)
time.sleep(3)
if filenames in latest_file:
df1 = pd.read_csv("C:\\inetpub\\wwwroot\\iAssist_IT_support\\New_IT_support_datasets\\" +
filenames, names=["errors", "solutions"])
df1 = df1.drop(0)
# print(df1.head())
df2 = pd.read_csv("C:\\inetpub\\wwwroot\\iAssist_IT_support\\existing_tickets.csv",
names=["errors", "solutions"])
combined_csv = pd.concat([df2, df1])
combined_csv.to_csv("C:\\inetpub\\wwwroot\\iAssist_IT_support\\new_tickets-chatdataset.csv",
index=False, encoding='utf-8-sig')
time.sleep(2)
# return redirect('/file_upload/status2')
return jsonify('New data merged with existing datasets')
#app.route('/file_upload/status3', methods=['POST'])
def status2():
app.logger.debug("file_upload/status3 route is executed")
if request.method == 'POST':
# Get data in json format
if request.get_json():
message = request.get_json()
message = message['data']
app.logger.debug(message)
return jsonify("New model training is in progress don't upload new file")
#app.route('/file_upload/status4', methods=['POST'])
def model_creation():
app.logger.debug("file_upload/status4 route is executed")
if request.method == 'POST':
# Get data in json format
if request.get_json():
message = request.get_json()
message = message['data']
app.logger.debug(message)
app.logger.debug(currentDateTime)
def model_run():
app.logger.debug("model script starts to run")
subprocess.run("python C:\\.....\\IT_support_chatbot-master\\"
"Python_files\\main.py", shell=True)
# time.sleep(20)
app.logger.debug("script ran successfully")
async def subprocess_call():
# run blocking function in another thread,
# and wait for it's result:
app.logger.debug("sub function execution starts")
await loop.run_in_executor(_executor, model_run)
asyncio.set_event_loop(asyncio.SelectorEventLoop())
loop = asyncio.get_event_loop()
loop.run_until_complete(subprocess_call())
loop.close()
return jsonify("Model created successfully for sent file %s" % filenames)
if __name__ == "__main__":
app.run()
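A note on the code above: subprocess.run() already blocks the calling thread until the child process exits, so the run_in_executor / run_until_complete machinery is equivalent to just calling model_run() directly; it cannot be what makes production "not wait". The 6-7 second return is more consistent with the child process dying early under the IIS app-pool identity (wrong interpreter on PATH, working directory, or file permissions). A hedged sketch that runs the script synchronously and logs the child's exit code and stderr, so the log shows why it stopped (the path is the question's own redacted placeholder):

import subprocess
import sys

def model_run():
    app.logger.debug("model script starts to run")
    # subprocess.run blocks until main.py exits; no asyncio wrapper needed.
    # sys.executable avoids depending on what "python" means on the
    # service account's PATH under IIS.
    result = subprocess.run(
        [sys.executable,
         "C:\\.....\\IT_support_chatbot-master\\Python_files\\main.py"],
        capture_output=True, text=True)
    # log why the child stopped: exit code and anything it wrote to stderr
    app.logger.debug("exit code: %s", result.returncode)
    if result.stderr:
        app.logger.debug("stderr: %s", result.stderr)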

How to encode and decode video frames as strings to send/receive over AWS websocket API integrated to AWS Lambda function using Python?

I have a deployed AWS WebSocket API Gateway integrated with an AWS Lambda function, with the basic use case of a chat application between clients. Currently clients connect to the websocket with a clientid and send messages to the Lambda function as JSON objects.
The Lambda just routes each message it receives to the designated client, which works perfectly fine for string messages. However, I want to send live video from one client to another using the same service, so I tried encoding the video frames into base64 strings and sending them over the API. I have tried a lot to send the live video, but I am unable to receive anything on the other client.
Following is the Lambda function which is deployed:
import json
import boto3
import botocore
from datetime import datetime
from aws_requests_auth.aws_auth import AWSRequestsAuth
import requests
from boto3.dynamodb.conditions import Key, Attr
from decimal import Decimal
import os

def applambda(event, context):
    connectionId = event["requestContext"]["connectionId"]
    body = json.loads(event["body"])
    ReceiverID = body["ReceiverID"]
    Msg = body["Msg"]
    MessageDateTime = str(datetime.now().timestamp())
    dynamoConnections = boto3.resource('dynamodb').Table("OnlineConnection")
    resultU = dynamoConnections.scan(FilterExpression=Key('ClientID').eq(ReceiverID))
    if resultU["Items"] is not None and len(resultU["Items"]) == 1:
        ReceiverConnectionID = resultU["Items"][0]["token"]
        resultX = dynamoConnections.scan(FilterExpression=Key("token").eq(connectionId))
        if resultX["Items"] is not None and len(resultX["Items"]) == 1:
            SenderID = resultX["Items"][0]["ClientID"]
            jsonObjtoSend = {"MessageID": MessageDateTime, "Message": Msg, "SenderID": SenderID}
            sendDirectMessage(ReceiverConnectionID, jsonObjtoSend)
            return {"statusCode": 200, "body": "Message Delivered"}
        else:
            return {"statusCode": 200, "body": "Error Occurred"}
    else:
        return {"statusCode": 200, "body": "User not Online"}

def sendDirectMessage(token, jsonobj):
    access_key = os.environ['access_key']
    secret_key = os.environ['secret_key']
    auth = AWSRequestsAuth(aws_access_key=access_key,
                           aws_secret_access_key=secret_key,
                           aws_host='<ws host>-east-1.amazonaws.com',
                           aws_region='us-east-1',
                           aws_service='execute-api')
    url = '<AWS http url>' + token.replace("=", "") + "%3D"
    req = requests.post(url, auth=auth, data=str(jsonobj))
    print(req.text)
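As an aside, instead of hand-signing an HTTP POST with aws_requests_auth, boto3 ships a management-API client for exactly this push; inside Lambda it picks up the execution role's credentials automatically, so no access keys in environment variables. A sketch, with the endpoint left as a placeholder in the same style as the redactions above:

import json
import boto3

# endpoint looks like https://{api-id}.execute-api.us-east-1.amazonaws.com/{stage}
apigw = boto3.client("apigatewaymanagementapi", endpoint_url="<AWS http url>")

def send_direct_message(connection_id, jsonobj):
    # pushes one payload to one connected websocket client
    apigw.post_to_connection(ConnectionId=connection_id,
                             Data=json.dumps(jsonobj).encode("utf-8"))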
Following is my client_stream.py file, which streams data to the API:
import json
import pprint
import base64
import cv2
import websocket
from websocket import create_connection

websocket.enableTrace(True)
ws = create_connection('<ws host wih client query>')
camera = cv2.VideoCapture(0)
str_obj = {
    "Msg": "",
    "ReceiverID": "shd",
    "action": "sendmsg"
}
while True:
    try:
        grabbed, frame = camera.read()  # grab the current frame
        frame = cv2.resize(frame, (640, 480))  # resize the frame
        encoded, buffer = cv2.imencode('.jpg', frame)
        buffer = buffer.tostring()
        jpg_as_text = base64.b64encode(buffer)
        str_obj["Msg"] = str(jpg_as_text)
        ws.send(json.dumps(str_obj))
    except KeyboardInterrupt:
        # print(jpg_as_text)
        camera.release()
        cv2.destroyAllWindows()
        break
Following is my client_viewer.py file which receives the message from Lambda:
import json
import pprint
import base64
import numpy as np
import cv2
import websocket
from websocket import create_connection
websocket.enableTrace(True)
ws = create_connection('<ws host wih client query>')
print("Connected")
while True:
try:
received = ws.recv()
#print(received)
if received is not None:
received = eval(received)
frame = received['Message']
img = base64.b64decode(frame)
npimg = np.fromstring(img, dtype=np.uint8)
source = cv2.imdecode(npimg, 1)
cv2.imshow("Stream", source)
cv2.waitKey(1)
except KeyboardInterrupt:
cv2.destroyAllWindows()
break
I am perfectly able to send any plain-text message to the other client through the API, e.g.:
{"Msg":"<message>", "ReceiverID":"shd", "action":"sendmsg"}
AWS websockets have fairly strict message size limits; API Gateway caps websocket messages at 128 KB, which a base64-encoded JPEG frame can easily exceed. Perhaps your messages are simply too large?
Try a Google search for "aws websocket size limit".
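One quick way to test that theory from client_stream.py: log the payload size and drop the JPEG quality until frames fit under the cap. A sketch (128 KB is API Gateway's documented per-message websocket limit at the time of writing; the quality value is a starting guess):

import base64
import cv2

MAX_PAYLOAD = 128 * 1024  # API Gateway websocket per-message cap

def encode_frame(frame, quality=50):
    # lower JPEG quality -> smaller buffer -> smaller base64 payload
    ok, buf = cv2.imencode('.jpg', frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
    payload = base64.b64encode(buf.tobytes()).decode('ascii')
    print("payload bytes:", len(payload))
    return payload if len(payload) < MAX_PAYLOAD else None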

Google cloud pubsub python synchronous pull

I have one topic and one subscription with multiple subscribers. In my application scenario I want to process messages on different subscribers with a specific number of messages being processed at a time. For example, suppose 8 messages are processing; when one message's processing is done, and the processed message has been acknowledged, the next message should be taken from the topic, so that 8 messages are always being processed in the background, with no duplicate message appearing on any subscriber.
For this I have used the synchronous pull method with max_messages = 8, but the next pull happens only after all messages' processing is completed. So we created our own scheduler where 8 processes run in the background, each pulling 1 message at a time, but still the next message is delivered only after all 8 messages' processing has completed.
Here is my code:
#!/usr/bin/env python3
import logging
import multiprocessing
import time
import sys
import random
from google.cloud import pubsub_v1

project_id = 'xyz'
subscription_name = 'abc'
NUM_MESSAGES = 4
ACK_DEADLINE = 50
SLEEP_TIME = 20

multiprocessing.log_to_stderr()
logger = multiprocessing.get_logger()
logger.setLevel(logging.INFO)

def worker(msg):
    logger.info("Received message:{}".format(msg.message.data))
    random_sleep = random.randint(200, 800)
    logger.info("Received message:{} for {} sec".format(msg.message.data, random_sleep))
    time.sleep(random_sleep)

def message_puller():
    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(project_id, subscription_name)
    while True:
        try:
            response = subscriber.pull(subscription_path, max_messages=1)
            message = response.received_messages[0]
            msg = message
            ack_id = message.ack_id
            process = multiprocessing.Process(target=worker, args=(message,))
            process.start()
            while process.is_alive():
                # `ack_deadline_seconds` must be between 10 and 600.
                subscriber.modify_ack_deadline(subscription_path, [ack_id],
                                               ack_deadline_seconds=ACK_DEADLINE)
                time.sleep(SLEEP_TIME)
            # Final ack.
            subscriber.acknowledge(subscription_path, [ack_id])
            logger.info("Acknowledging message: {}".format(msg.message.data))
        except Exception as e:
            print(e)
            continue

def synchronous_pull():
    p = []
    for i in range(0, NUM_MESSAGES):
        p.append(multiprocessing.Process(target=message_puller))
    for i in range(0, NUM_MESSAGES):
        p[i].start()
    for i in range(0, NUM_MESSAGES):
        p[i].join()

if __name__ == '__main__':
    synchronous_pull()
Also, sometimes subscriber.pull does not pull any messages even though the while loop is always True. It gives me the error
list index (0) out of range
concluding that subscriber.pull does not pull messages even when messages are on the topic, although after some time it starts pulling again. Why is that? (When a pull returns no messages, response.received_messages is empty, so received_messages[0] raises exactly this IndexError.)
I have tried asynchronous pulling with flow control, but duplicate messages turn up on multiple subscribers. If some other method would resolve my issue, please let me know. Thanks in advance.
Google Cloud Pub/Sub guarantees at-least-once delivery (docs), which means a message may be delivered more than once. To tackle this, you need to make your program/system idempotent.
You have multiple subscribers each pulling 8 messages.
To avoid the same message getting processed by multiple subscribers, acknowledge the message as soon as any subscriber pulls it and proceeds with processing, rather than acknowledging at the very end, after the entire processing of the message.
Also, instead of running your main script continuously, sleep for some constant time when there are no messages in the queue.
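A minimal sketch of that ack-first pattern, using the subscriber and subscription_path set up as in the question (the trade-off: if a worker crashes after the ack, that message is lost rather than redelivered; process_message is a hypothetical stand-in for the real work):

response = subscriber.pull(subscription_path, max_messages=8)
for received in response.received_messages:
    # ack immediately so no other subscriber gets this message redelivered
    subscriber.acknowledge(subscription_path, [received.ack_id])
    # then do the long-running work without racing the ack deadline
    process_message(received.message.data)  # hypothetical worker function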
I had similar code, where I used synchronous pull, except I did not use parallel processing.
Here's the code:
PubSubHandler - Class to handle Pubsub related operations
from google.cloud import pubsub_v1
from google.api_core.exceptions import DeadlineExceeded

class PubSubHandler:
    def __init__(self, subscriber_config):
        self.project_name = subscriber_config['PROJECT_NAME']
        self.subscriber_name = subscriber_config['SUBSCRIBER_NAME']
        self.subscriber = pubsub_v1.SubscriberClient()
        self.subscriber_path = self.subscriber.subscription_path(self.project_name, self.subscriber_name)

    def pull_messages(self, number_of_messages):
        try:
            response = self.subscriber.pull(self.subscriber_path, max_messages=number_of_messages)
            received_messages = response.received_messages
        except DeadlineExceeded as e:
            received_messages = []
            print('No messages caused error')
        return received_messages

    def ack_messages(self, message_ids):
        if len(message_ids) > 0:
            self.subscriber.acknowledge(self.subscriber_path, message_ids)
        return True
Utils - Class for util methods
import json

class Utils:
    def __init__(self):
        pass

    def decoded_data_to_json(self, decoded_data):
        try:
            decoded_data = decoded_data.replace("'", '"')
            json_data = json.loads(decoded_data)
            return json_data
        except Exception as e:
            raise Exception('error while parsing json')

    def raw_data_to_utf(self, raw_data):
        try:
            decoded_data = raw_data.decode('utf8')
            return decoded_data
        except Exception as e:
            raise Exception('error converting to UTF')
Orcestrator - Main script
import time
import json
import logging

from utils import Utils
from db_connection import DbHandler
from pub_sub_handler import PubSubHandler

class Orcestrator:
    def __init__(self):
        self.MAX_NUM_MESSAGES = 2
        self.SLEEP_TIME = 10
        self.util_methods = Utils()
        self.pub_sub_handler = PubSubHandler(subscriber_config)

    def main_handler(self):
        to_ack_ids = []
        pulled_messages = self.pub_sub_handler.pull_messages(self.MAX_NUM_MESSAGES)
        if len(pulled_messages) < 1:
            self.SLEEP_TIME = 1
            print('no messages in queue')
            return
        logging.info('messages in queue')
        self.SLEEP_TIME = 10
        for message in pulled_messages:
            raw_data = message.message.data
            try:
                decoded_data = self.util_methods.raw_data_to_utf(raw_data)
                json_data = self.util_methods.decoded_data_to_json(decoded_data)
                print(json_data)
            except Exception as e:
                logging.error(e)
            to_ack_ids.append(message.ack_id)
        if self.pub_sub_handler.ack_messages(to_ack_ids):
            print('acknowledged msg_ids')

if __name__ == "__main__":
    orecestrator = Orcestrator()
    print('Receiving data..')
    while True:
        orecestrator.main_handler()
        time.sleep(orecestrator.SLEEP_TIME)

I am unable to simultaneously stream the feed from my RBP3's camera and record it to a file using Python

I know how to save to a file using the code below (and timestamp the feed), and I know how to stream using uv4l, but I simply can't manage to do both simultaneously.
import time
time.sleep(60)
import picamera
import datetime as dt

camera = picamera.PiCamera()
camera.resolution = (640, 480)
# camera.vflip = True
camera.led = False
x = 0
while True:
    bideoname = "/media/pi/cam/" + dt.datetime.now().strftime('%Y-%m-%d-%H') + ".h264"
    camera.annotate_background = picamera.Color('black')
    camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    # camera.start_preview()
    camera.start_recording(bideoname)
    start = dt.datetime.now()
    while (dt.datetime.now() - start).seconds < 3600:
        camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        camera.wait_recording(0.2)
    camera.stop_recording()
    x = x + 1
I imagine I would use Flask to create a local website to stream the feed to.
I have looked up and down the internet, and this example by Dave Jones seems to be the closest solution, but I don't know if a raw socket can communicate with a browser:
https://raspberrypi.stackexchange.com/questions/27041/record-and-stream-video-from-camera-simultaneously
There is also this code, which streams the camera feed to a page, but makes no mention of how to simultaneously record as well:
from flask import Flask, render_template, Response

# Raspberry Pi camera module (requires picamera package, developed by Miguel Grinberg)
from camera_pi import Camera

app = Flask(__name__)

@app.route('/')
def index():
    """Video streaming home page."""
    return render_template('index.html')

def gen(camera):
    """Video streaming generator function."""
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/video_feed')
def video_feed():
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80, debug=True, threaded=True)
Or maybe this is all wrong and there is a simpler solution to this?
Thanks for the help.
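For what it's worth, picamera itself can probably do both jobs at once: the camera's encoder exposes multiple splitter ports, so one port can record H.264 to a file while JPEG stills for the Flask generator are captured from the video port. A sketch along those lines, assuming picamera's splitter_port and use_video_port options behave as documented (untested here; file path and resolution borrowed from the question):

import io
import picamera

camera = picamera.PiCamera(resolution=(640, 480))
# record continuously to disk on splitter port 1
camera.start_recording('/media/pi/cam/recording.h264', splitter_port=1)

def gen():
    """Yield JPEG frames for the multipart stream while the recording runs."""
    stream = io.BytesIO()
    # use_video_port=True grabs stills from the video pipeline (port 0),
    # so it does not interrupt the H.264 recording on port 1
    for _ in camera.capture_continuous(stream, format='jpeg',
                                       use_video_port=True):
        stream.seek(0)
        yield (b'--frame\r\nContent-Type: image/jpeg\r\n\r\n'
               + stream.read() + b'\r\n')
        stream.seek(0)
        stream.truncate()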
