Getting error in PySpark communication through RPC in RabbitMQ - apache-spark

I am new to using RabbitMQ for PySpark communication from a remote PC through RPC. For testing purposes I have developed a test code which is giving me an error.
I have followed the RabbitMQ documentation tutorial for implementing RPC over PySpark.
Here is my Spark RPC server code:
import pika
from tkinter import *
from pyspark.sql import SparkSession
from pyspark import SparkConf, SparkContext
import json
import re

connectionparam = pika.ConnectionParameters(host="localhost")
connection = pika.BlockingConnection(connectionparam)
channel = connection.channel()
channel.queue_declare(queue='rpc_queue')

spark = SparkSession.builder \
    .config("spark.sql.warehouse.dir", "C:\spark\spark-warehouse") \
    .appName("TestApp") \
    .enableHiveSupport() \
    .getOrCreate()

print("success")

# establishing character
#sqlstring="SELECT lflow1.LeaseType as LeaseType, lflow1.Status as Status, lflow1.Property as property, lflow1.City as City, lesflow2.DealType as DealType, lesflow2.Area as Area, lflow1.Did as DID, lesflow2.MID as MID from lflow1, lesflow2 WHERE lflow1.Did = lesflow2.MID"

def queryBuilder(sqlval):
    print("printing", sqlval)
    df = spark.sql(sqlval)
    print("printing data frame table")
    df.show()
    resultlist = df.toJSON().collect()
    dumpdata = re.sub(r"\'", "", str(resultlist))
    jsondata = json.dumps(dumpdata)
    # print(jsondata)
    return jsondata

def on_request(ch, method, props, body):
    n = body
    print("printing request body ", n)
    response = queryBuilder(n)
    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id=props.correlation_id),
                     body=response)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(on_request, queue='rpc_queue')
print("[x] Awaiting RPC Request")
channel.start_consuming()

master = Tk()
entryval = Entry(master)
entryval.grid(row=0, column=1)
Button(master, text='Quit', command=master.quit).grid(row=3, column=1, sticky=W, pady=50)
mainloop()
and here is my RPC client code for the remote PySpark application:
import pika
import uuid

class SparkRpcClient(object):
    def __init__(self):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(
            host='localhost'))
        self.channel = self.connection.channel()
        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue
        self.channel.basic_consume(self.on_response, no_ack=True,
                                   queue=self.callback_queue)

    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:
            self.response = body

    def call(self, querymsg):
        self.response = None
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(exchange='',
                                   routing_key='rpc_queue',
                                   properties=pika.BasicProperties(
                                       reply_to=self.callback_queue,
                                       correlation_id=self.corr_id,
                                   ),
                                   body=querymsg)
        while self.response is None:
            self.connection.process_data_events()
        return int(self.response)

sparkrpc = SparkRpcClient()
sqlstring = "SELECT lflow1.LeaseType as LeaseType, lflow1.Status as Status, lflow1.Property as property, lflow1.City as City, lesflow2.DealType as DealType, lesflow2.Area as Area, lflow1.Did as DID, lesflow2.MID as MID from lflow1, lesflow2 WHERE lflow1.Did = lesflow2.MID"
print(" [x] Requesting query")
response = sparkrpc.call(sqlstring)
print(" [.] Got %s" % response)
My server receives the request string from the client and prints it, but it fails inside my queryBuilder() function, which processes the SQL string and returns JSON data. Moreover, I have sent the request multiple times and it seems that the individual requests are queued in the RPC queue but never cleared, because if I run only the server script I get the same error. Maybe I am missing something here; can anyone help me figure it out? I just want to return JSON data to the client.
Thanks in advance
Kalyan

You're passing an incompatible type (it looks like either bytes or bytearray) where str is expected.
You should decode the content to a string first.
def queryBuilder(sqlval, enc):
    ...
    df = spark.sql(sqlval.decode(enc))
    df.show()
    ...
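Applied to the code in the question, one minimal way to do this is to decode in the consumer callback and leave queryBuilder unchanged (a sketch, assuming the client sends UTF-8 text):

def on_request(ch, method, props, body):
    # `body` arrives as bytes; Spark SQL expects a str, so decode it first.
    sqlval = body.decode('utf-8')
    print("printing request body ", sqlval)
    response = queryBuilder(sqlval)
    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id=props.correlation_id),
                     body=response)
    ch.basic_ack(delivery_tag=method.delivery_tag)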

Related

Python-asyncio and subprocess deployment on IIS: returning HTTP response without running another script completely

I'm facing an issue in creating real-time status updates, via a web framework, for merging new datasets with an old one and for machine learning model creation results. The task is simple, in the following steps.
A user/client sends a new dataset as a .CSV file to the server,
on the server side my Windows machine receives the file and sends an acknowledgement,
the new dataset is merged with the old one for new machine learning model creation, and
another Python script is run (the one that creates a new sequential deep-learning model). After the successful completion of that script, my code has to return the response to the client.
I have deployed my Python Flask application on IIS 10. To run the other Python script, this main Flask API script has to wait for the model creation script to complete. The model creation script contains several steps, such as loading datasets, tokenizing, one-hot encoding, padding, model training for 100 epochs, and finally producing prediction results.
My exact goal is that this Flask API should wait until the entire process completes. I'm sure it will take 8-9 minutes to complete the whole script invoked via subprocess.run(). While testing this code in development mode it works excellently without any issues, but while testing it in production mode on IIS it does not wait for the whole process and returns the response to the client within 6-7 seconds.
For debugging purposes I included logging to record all events in both the Flask script and the machine learning model creation script. Through that I came to understand that the model creation script only ran to about 10%. First I tried simple approaches with async def and await around subprocess.run(), which didn't help. Then I added threading, get_event_loop() and run_until_complete() to make my parent code wait until the whole process finishes. But I'm stuck; I couldn't find a proper solution. Please let me know what I did wrong. Thank you.
Configurations:
Python 3.7.9
Windows server 2019 and
IIS 10.0 Express
My code:
import os
import time
import glob
import subprocess
import pandas as pd
from flask import Flask, request, jsonify
from werkzeug.utils import secure_filename
from datetime import datetime
import logging
import asyncio
from concurrent.futures import ThreadPoolExecutor

ALLOWED_EXTENSIONS = {'csv', 'xlsx'}
_executor = ThreadPoolExecutor(1)

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = "C:\\inetpub\\wwwroot\\iAssist_IT_support\\New_IT_support_datasets"

currentDateTime = datetime.now()
filenames = None

logger = logging.getLogger(__name__)
app.logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s:%(name)s:%(message)s')
file_handler = logging.FileHandler('model-creation-status.log')
file_handler.setFormatter(formatter)
# stream_handler = logging.StreamHandler()
# stream_handler.setFormatter(formatter)
app.logger.addHandler(file_handler)
# app.logger.addHandler(stream_handler)

def allowed_file(filename):
    return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

@app.route('/file_upload')
def home():
    return jsonify("Hello, This is a file-upload API, To send the file, use http://13.213.81.139/file_upload/send_file")

@app.route('/file_upload/status1', methods=['POST'])
def upload_file():
    app.logger.debug("/file_upload/status1 is execution")
    # check if the post request has the file part
    if 'file' not in request.files:
        app.logger.debug("No file part in the request")
        response = jsonify({'message': 'No file part in the request'})
        response.status_code = 400
        return response
    file = request.files['file']
    if file.filename == '':
        app.logger.debug("No file selected for uploading")
        response = jsonify({'message': 'No file selected for uploading'})
        response.status_code = 400
        return response
    if file and allowed_file(file.filename):
        filename = secure_filename(file.filename)
        file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
        print(filename)
        print(file)
        app.logger.debug("Spreadsheet received successfully")
        response = jsonify({'message': 'Spreadsheet uploaded successfully'})
        response.status_code = 201
        return response
    else:
        app.logger.debug("Allowed file types are csv or xlsx")
        response = jsonify({'message': 'Allowed file types are csv or xlsx'})
        response.status_code = 400
        return response

@app.route('/file_upload/status2', methods=['POST'])
def status1():
    global filenames
    app.logger.debug("file_upload/status2 route is executed")
    if request.method == 'POST':
        # Get data in json format
        if request.get_json():
            filenames = request.get_json()
            app.logger.debug(filenames)
            filenames = filenames['data']
            # print(filenames)
            folderpath = glob.glob('C:\\inetpub\\wwwroot\\iAssist_IT_support\\New_IT_support_datasets\\*.csv')
            latest_file = max(folderpath, key=os.path.getctime)
            # print(latest_file)
            time.sleep(3)
            if filenames in latest_file:
                df1 = pd.read_csv("C:\\inetpub\\wwwroot\\iAssist_IT_support\\New_IT_support_datasets\\" +
                                  filenames, names=["errors", "solutions"])
                df1 = df1.drop(0)
                # print(df1.head())
                df2 = pd.read_csv("C:\\inetpub\\wwwroot\\iAssist_IT_support\\existing_tickets.csv",
                                  names=["errors", "solutions"])
                combined_csv = pd.concat([df2, df1])
                combined_csv.to_csv("C:\\inetpub\\wwwroot\\iAssist_IT_support\\new_tickets-chatdataset.csv",
                                    index=False, encoding='utf-8-sig')
                time.sleep(2)
                # return redirect('/file_upload/status2')
                return jsonify('New data merged with existing datasets')

@app.route('/file_upload/status3', methods=['POST'])
def status2():
    app.logger.debug("file_upload/status3 route is executed")
    if request.method == 'POST':
        # Get data in json format
        if request.get_json():
            message = request.get_json()
            message = message['data']
            app.logger.debug(message)
            return jsonify("New model training is in progress don't upload new file")

@app.route('/file_upload/status4', methods=['POST'])
def model_creation():
    app.logger.debug("file_upload/status4 route is executed")
    if request.method == 'POST':
        # Get data in json format
        if request.get_json():
            message = request.get_json()
            message = message['data']
            app.logger.debug(message)
            app.logger.debug(currentDateTime)

            def model_run():
                app.logger.debug("model script starts to run")
                subprocess.run("python C:\\.....\\IT_support_chatbot-master\\"
                               "Python_files\\main.py", shell=True)
                # time.sleep(20)
                app.logger.debug("script ran successfully")

            async def subprocess_call():
                # run blocking function in another thread,
                # and wait for it's result:
                app.logger.debug("sub function execution starts")
                await loop.run_in_executor(_executor, model_run)

            asyncio.set_event_loop(asyncio.SelectorEventLoop())
            loop = asyncio.get_event_loop()
            loop.run_until_complete(subprocess_call())
            loop.close()
            return jsonify("Model created successfully for sent file %s" % filenames)

if __name__ == "__main__":
    app.run()

How to publish to Pub/Sub from Dataflow in batch (efficiently)?

I want to publish messages to a Pub/Sub topic with some attributes from a Dataflow job in batch mode.
My Dataflow pipeline is written with Python 3.8 and apache-beam 2.27.0.
It works with @Ankur's solution here: https://stackoverflow.com/a/55824287/9455637
But I think it could be more efficient with a shared Pub/Sub client: https://stackoverflow.com/a/55833997/9455637
However an error occurred:
return StockUnpickler.find_class(self, module, name)
AttributeError: Can't get attribute 'PublishFn' on <module 'dataflow_worker.start' from '/usr/local/lib/python3.8/site-packages/dataflow_worker/start.py'>
Questions:
Would the shared publisher implementation improve beam pipeline performance?
Is there another way to avoid the pickling error on my shared publisher client?
My Dataflow pipeline:
import apache_beam as beam
from apache_beam.io.gcp import bigquery
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
from google.cloud.pubsub_v1 import PublisherClient
import json
import argparse
import re
import logging


class PubsubClient(PublisherClient):
    def __reduce__(self):
        return self.__class__, (self.batch_settings,)


# The DoFn to perform on each element in the input PCollection.
class PublishFn(beam.DoFn):
    def __init__(self):
        from google.cloud import pubsub_v1

        batch_settings = pubsub_v1.types.BatchSettings(
            max_bytes=1024,  # One kilobyte
            max_latency=1,   # One second
        )
        self.publisher = PubsubClient(batch_settings)
        super().__init__()

    def process(self, element, **kwargs):
        future = self.publisher.publish(
            topic=element["topic"],
            data=json.dumps(element["data"]).encode("utf-8"),
            **element["attributes"],
        )
        return future.result()


def run(argv=None, save_main_session=True):
    """Main entry point; defines and runs the pipeline."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--source_table_id",
        dest="source_table_id",
        default="",
        help="BigQuery source table <project>.<dataset>.<table> with columns (topic, attributes, data)",
    )
    known_args, pipeline_args = parser.parse_known_args(argv)

    # We use the save_main_session option because one or more DoFn's in this
    # workflow rely on global context (e.g., a module imported at module level).
    pipeline_options = PipelineOptions(pipeline_args)
    # pipeline_options.view_as(SetupOptions).save_main_session = save_main_session

    bq_source_table = known_args.source_table_id
    bq_table_regex = r"^(?P<PROJECT_ID>[a-zA-Z0-9_-]*)[\.|\:](?P<DATASET_ID>[a-zA-Z0-9_]*)\.(?P<TABLE_ID>[a-zA-Z0-9_-]*)$"

    regex_match = re.search(bq_table_regex, bq_source_table)
    if not regex_match:
        raise ValueError(
            f"Bad BigQuery table id : `{bq_source_table}` please match {bq_table_regex}"
        )

    table_ref = bigquery.TableReference(
        projectId=regex_match.group("PROJECT_ID"),
        datasetId=regex_match.group("DATASET_ID"),
        tableId=regex_match.group("TABLE_ID"),
    )

    with beam.Pipeline(options=pipeline_options) as p:
        (
            p
            | "ReadFromBqTable" >> bigquery.ReadFromBigQuery(table=table_ref, use_json_exports=True)  # Each row contains : topic / attributes / data
            | "PublishRowsToPubSub" >> beam.ParDo(PublishFn())
        )


if __name__ == "__main__":
    logging.getLogger().setLevel(logging.INFO)
    run()
After fussing with this a bit, I think I have an answer that works consistently and is, if not world-beatingly performant, at least tolerably usable:
import logging

import apache_beam as beam
from apache_beam.io.gcp.pubsub import PubsubMessage

from google.cloud.pubsub_v1 import PublisherClient
from google.cloud.pubsub_v1.types import (
    BatchSettings,
    LimitExceededBehavior,
    PublishFlowControl,
    PublisherOptions,
)


class PublishClient(PublisherClient):
    """
    You have to override __reduce__ to make PublisherClient pickleable 😡 😤 🤬
    Props to 'Ankur' and 'Benjamin' on SO for figuring this part out; god knows
    I would not have...
    """

    def __reduce__(self):
        return self.__class__, (self.batch_settings, self.publisher_options)


class PubsubWriter(beam.DoFn):
    """
    beam.io.gcp.pubsub does not yet support batch operations, so
    we do this the hard way. it's not as performant as the native
    pubsubio but it does the job.
    """

    def __init__(self, topic: str):
        self.topic = topic
        self.window = beam.window.GlobalWindow()
        self.count = 0

    def setup(self):
        batch_settings = BatchSettings(
            max_bytes=1e6,  # 1MB
            # by default it is 10 ms, should be less than timeout used in future.result() to avoid timeout
            max_latency=1,
        )

        publisher_options = PublisherOptions(
            enable_message_ordering=False,
            # better to be slow than to drop messages during a recovery...
            flow_control=PublishFlowControl(limit_exceeded_behavior=LimitExceededBehavior.BLOCK),
        )

        self.publisher = PublishClient(batch_settings, publisher_options)

    def start_bundle(self):
        self.futures = []

    def process(self, element: PubsubMessage, window=beam.DoFn.WindowParam):
        self.window = window
        self.futures.append(
            self.publisher.publish(
                topic=self.topic,
                data=element.data,
                **element.attributes,
            )
        )

    def finish_bundle(self):
        """Iterate over the list of async publish results and block
        until all of them have either succeeded or timed out. Yield
        a WindowedValue of the success/fail counts."""
        results = []
        self.count = self.count + len(self.futures)
        for fut in self.futures:
            try:
                # future.result() blocks until success or timeout;
                # we've set a max_latency of 60s upstairs in BatchSettings,
                # so we should never spend much time waiting here.
                results.append(fut.result(timeout=60))
            except Exception as ex:
                results.append(ex)

        res_count = {"success": 0}
        for res in results:
            if isinstance(res, str):
                res_count["success"] += 1
            else:
                # if it's not a string, it's an exception
                msg = str(res)
                if msg not in res_count:
                    res_count[msg] = 1
                else:
                    res_count[msg] += 1

        logging.info(f"Pubsub publish results: {res_count}")

        yield beam.utils.windowed_value.WindowedValue(
            value=res_count,
            timestamp=0,
            windows=[self.window],
        )

    def teardown(self):
        logging.info(f"Published {self.count} messages")
The trick is that if you call future.result() inside the process() method, you will block until that single message is successfully published, so instead collect a list of futures and then at the end of the bundle make sure they're all either published or definitively timed out. Some quick testing with one of our internal pipelines suggested that this approach can publish 1.6M messages in ~200s.
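For reference, wiring this DoFn into a pipeline might look roughly like the sketch below; the table reference, topic name, and the shape of each BigQuery row (a data value plus an attributes dict) are assumptions carried over from the question, not something this answer specifies:

import json

import apache_beam as beam
from apache_beam.io.gcp import bigquery
from apache_beam.io.gcp.pubsub import PubsubMessage

# `pipeline_options` and `table_ref` are assumed to be built as in the
# question's run() function; the topic path below is a placeholder.
with beam.Pipeline(options=pipeline_options) as p:
    (
        p
        | "ReadFromBqTable" >> bigquery.ReadFromBigQuery(table=table_ref, use_json_exports=True)
        | "ToPubsubMessage" >> beam.Map(
            lambda row: PubsubMessage(
                data=json.dumps(row["data"]).encode("utf-8"),
                attributes=row["attributes"],
            )
        )
        | "PublishRowsToPubSub" >> beam.ParDo(
            PubsubWriter(topic="projects/my-project/topics/my-topic")
        )
    )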

Google cloud pubsub python synchronous pull

I have one topic and one subscription with multiple subscribers. My application scenario is that I want to process messages on different subscribers, with a specific number of messages being processed at a time. That means that if, say, 8 messages are being processed and one message finishes, then after acknowledging the processed message the next message should be taken from the topic, while taking care that no duplicate message appears on any subscriber and that 8 messages are always being processed in the background.
For this I have used the synchronous pull method with max_messages = 8, but the next pull only happens after all messages' processing is completed. So we have created our own scheduler where 8 processes should be running in the background at the same time, each pulling 1 message at a time, but still the next message is only delivered after all 8 messages have finished processing.
Here is my code:
#!/usr/bin/env python3

import logging
import multiprocessing
import time
import sys
import random
from google.cloud import pubsub_v1

project_id = 'xyz'
subscription_name = 'abc'
NUM_MESSAGES = 4
ACK_DEADLINE = 50
SLEEP_TIME = 20

multiprocessing.log_to_stderr()
logger = multiprocessing.get_logger()
logger.setLevel(logging.INFO)

def worker(msg):
    logger.info("Received message:{}".format(msg.message.data))
    random_sleep = random.randint(200, 800)
    logger.info("Received message:{} for {} sec".format(msg.message.data, random_sleep))
    time.sleep(random_sleep)

def message_puller():
    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(project_id, subscription_name)
    while(True):
        try:
            response = subscriber.pull(subscription_path, max_messages=1)
            message = response.received_messages[0]
            msg = message
            ack_id = message.ack_id
            process = multiprocessing.Process(target=worker, args=(message,))
            process.start()
            while process.is_alive():
                # `ack_deadline_seconds` must be between 10 to 600.
                subscriber.modify_ack_deadline(subscription_path, [ack_id], ack_deadline_seconds=ACK_DEADLINE)
                time.sleep(SLEEP_TIME)
            # Final ack.
            subscriber.acknowledge(subscription_path, [ack_id])
            logger.info("Acknowledging message: {}".format(msg.message.data))
        except Exception as e:
            print(e)
            continue

def synchronous_pull():
    p = []
    for i in range(0, NUM_MESSAGES):
        p.append(multiprocessing.Process(target=message_puller))
    for i in range(0, NUM_MESSAGES):
        p[i].start()
    for i in range(0, NUM_MESSAGES):
        p[i].join()

if __name__ == '__main__':
    synchronous_pull()
Also, sometimes subscriber.pull does not pull any messages even though the while loop is always True. It gives me the error
list index (0) out of range
concluding that subscriber.pull is not pulling messages even though messages are on the topic, but after some time it starts pulling. Why is that?
I have tried asynchronous pulling and flow control, but duplicate messages are found on multiple subscribers. If any other method would resolve my issue, please let me know. Thanks in advance.
Google Cloud Pub/Sub ensures at-least-once delivery (docs), which means messages may be delivered more than once. To tackle this, you need to make your program/system idempotent.
You have multiple subscribers pulling 8 messages each.
To avoid the same message getting processed by multiple subscribers, acknowledge the message as soon as any subscriber pulls it and proceeds further with processing, rather than acknowledging it at the end, after the entire processing of the message.
Also, instead of running your main script continuously, use sleep for some constant time when there are no messages in the queue.
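A minimal sketch of that ack-first flow, written against the same synchronous client the question uses (the project ID, subscription name and the process() helper are placeholders, not working values):

from google.cloud import pubsub_v1

def process(payload):
    # hypothetical stand-in for the real message processing
    print(payload)

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path('my-project', 'my-subscription')

response = subscriber.pull(subscription_path, max_messages=8)

# Ack immediately so these messages are not redelivered to another subscriber.
# The trade-off: if this worker crashes before processing finishes, those
# messages are lost, so processing must be safe to skip or repeat.
ack_ids = [received.ack_id for received in response.received_messages]
if ack_ids:
    subscriber.acknowledge(subscription_path, ack_ids)

# Process only after acknowledging.
for received in response.received_messages:
    process(received.message.data)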
I had a similar code, where I used synchronous pull except I did not use parallel processing.
Here's the code:
PubSubHandler - Class to handle Pubsub related operations
from google.cloud import pubsub_v1
from google.api_core.exceptions import DeadlineExceeded

class PubSubHandler:
    def __init__(self, subscriber_config):
        self.project_name = subscriber_config['PROJECT_NAME']
        self.subscriber_name = subscriber_config['SUBSCRIBER_NAME']
        self.subscriber = pubsub_v1.SubscriberClient()
        self.subscriber_path = self.subscriber.subscription_path(self.project_name, self.subscriber_name)

    def pull_messages(self, number_of_messages):
        try:
            response = self.subscriber.pull(self.subscriber_path, max_messages=number_of_messages)
            received_messages = response.received_messages
        except DeadlineExceeded as e:
            received_messages = []
            print('No messages caused error')
        return received_messages

    def ack_messages(self, message_ids):
        if len(message_ids) > 0:
            self.subscriber.acknowledge(self.subscriber_path, message_ids)
        return True
Utils - Class for util methods
import json

class Utils:
    def __init__(self):
        pass

    def decoded_data_to_json(self, decoded_data):
        try:
            decoded_data = decoded_data.replace("'", '"')
            json_data = json.loads(decoded_data)
            return json_data
        except Exception as e:
            raise Exception('error while parsing json')

    def raw_data_to_utf(self, raw_data):
        try:
            decoded_data = raw_data.decode('utf8')
            return decoded_data
        except Exception as e:
            raise Exception('error converting to UTF')
Orcestrator - Main script
import time
import json
import logging

from utils import Utils
from db_connection import DbHandler
from pub_sub_handler import PubSubHandler

class Orcestrator:
    def __init__(self):
        self.MAX_NUM_MESSAGES = 2
        self.SLEEP_TIME = 10
        self.util_methods = Utils()
        self.pub_sub_handler = PubSubHandler(subscriber_config)

    def main_handler(self):
        to_ack_ids = []
        pulled_messages = self.pub_sub_handler.pull_messages(self.MAX_NUM_MESSAGES)

        if len(pulled_messages) < 1:
            self.SLEEP_TIME = 1
            print('no messages in queue')
            return

        logging.info('messages in queue')
        self.SLEEP_TIME = 10

        for message in pulled_messages:
            raw_data = message.message.data
            try:
                decoded_data = self.util_methods.raw_data_to_utf(raw_data)
                json_data = self.util_methods.decoded_data_to_json(decoded_data)
                print(json_data)
            except Exception as e:
                logging.error(e)
            to_ack_ids.append(message.ack_id)

        if self.pub_sub_handler.ack_messages(to_ack_ids):
            print('acknowledged msg_ids')

if __name__ == "__main__":
    orecestrator = Orcestrator()
    print('Receiving data..')
    while True:
        orecestrator.main_handler()
        time.sleep(orecestrator.SLEEP_TIME)

Getting Interactive Brokers API into Pandas

New to Python and the IB API, and stuck on this simple thing. This application works correctly and prints the IB server reply. However, I cannot figure out how to get this data into a pandas DataFrame, or any other variable for that matter. How do you "get the data out?" Thanks!
I cannot find anything on forums, in documentation, or on YouTube with a useful example. I think the answer must be to return accountSummary to a pd.Series, but I have no idea how.
Expected output would be a data series or variable that can be manipulated outside of the application.
from ibapi import wrapper
from ibapi.client import EClient
from ibapi.utils import iswrapper  # just for decorator
from ibapi.common import *
import pandas as pd

class TestApp(wrapper.EWrapper, EClient):
    def __init__(self):
        wrapper.EWrapper.__init__(self)
        EClient.__init__(self, wrapper=self)

    @iswrapper
    def nextValidId(self, orderId: int):
        print("setting nextValidOrderId: %d", orderId)
        self.nextValidOrderId = orderId
        # here is where you start using api
        self.reqAccountSummary(9002, "All", "$LEDGER")

    @iswrapper
    def error(self, reqId: TickerId, errorCode: int, errorString: str):
        print("Error. Id: ", reqId, " Code: ", errorCode, " Msg: ", errorString)

    @iswrapper
    def accountSummary(self, reqId: int, account: str, tag: str, value: str, currency: str):
        print("Acct Summary. ReqId:", reqId, "Acct:", account,
              "Tag: ", tag, "Value:", value, "Currency:", currency)
        # IB API data returns here, how to pass it to a variable or pd.series

    @iswrapper
    def accountSummaryEnd(self, reqId: int):
        print("AccountSummaryEnd. Req Id: ", reqId)
        # now we can disconnect
        self.disconnect()

def main():
    app = TestApp()
    app.connect("127.0.0.1", 4001, clientId=123)
    test = app.accountSummary
    app.run()

if __name__ == "__main__":
    main()
Hi, I had the same problem and collections did it for me. Here is my code for CFD data. Maybe it will help somebody. You will have your data in app.df. Any suggestions for improvement are more than welcome.
import collections
import datetime as dt
from threading import Timer

from ibapi.client import EClient
from ibapi.wrapper import EWrapper
from ibapi.contract import Contract
import pandas as pd

# get yesterday and put it to correct format yyyymmdd{space}{space}hh:mm:dd
yesterday = str(dt.datetime.today() - dt.timedelta(1))
yesterday = yesterday.replace('-', '')

IP = '127.0.0.1'
PORT = 7497

class App(EClient, EWrapper):
    def __init__(self):
        super().__init__(self)
        self.data = collections.defaultdict(list)

    def error(self, reqId, errorCode, errorString):
        print(f'Error {reqId}, {errorCode}, {errorString}')

    def historicalData(self, reqId, bar):
        self.data['date'].append(bar.date)
        self.data['open'].append(bar.open)
        self.data['high'].append(bar.high)
        self.data['low'].append(bar.low)
        self.data['close'].append(bar.close)
        self.data['volume'].append(bar.volume)
        self.df = pd.DataFrame.from_dict(self.data)

    def stop(self):
        self.done = True
        self.disconnect()

# create App object
app = App()
print('App created...')

app.connect(IP, PORT, 0)
print('App connected...')

# create contract
contract = Contract()
contract.symbol = 'IBDE30'
contract.secType = 'CFD'
contract.exchange = 'SMART'
contract.currency = 'EUR'
print('Contract created...')

# request historical data for contract
app.reqHistoricalData(reqId=1,
                      contract=contract,
                      endDateTime=yesterday,
                      durationStr='1 W',
                      barSizeSetting='15 mins',
                      whatToShow='ASK',
                      useRTH=0,
                      formatDate=1,
                      keepUpToDate=False,
                      chartOptions=[])

Timer(4, app.stop).start()
app.run()
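Once app.run() returns (the Timer above calls app.stop() after roughly four seconds), the bars collected in historicalData should be available on the instance; a quick check might look like this, assuming the request succeeded and at least one bar was returned:

# Inspect the collected bars after the client has disconnected.
print(app.df.shape)
print(app.df.head())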
I'd store the data to a dictionary, create a dataframe from the dictionary, and append the new dataframe to the main dataframe using the concat function. Here's an example:
def accountSummary(self, reqId: int, account: str, tag: str, value: str, currency: str):
    # NOTE: main_df is assumed to already exist (e.g. as a global or an instance
    # attribute) before this callback fires, so the concat has something to append to.
    acct_dict = {"account": account, "value": value, "currency": currency}
    acct_df = pd.DataFrame([acct_dict], columns=acct_dict.keys())
    main_df = pd.concat([main_df, acct_df], axis=0).reset_index()
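As a self-contained illustration of that pattern (the rows list and the callback names here are hypothetical stand-ins, not part of the IB API), the accumulation can be sketched like this:

import pandas as pd

# Hypothetical accumulator mirroring the callback flow above: each
# accountSummary-style callback appends one row, and the end-callback
# builds the final DataFrame in one go.
rows = []

def on_account_summary(account, tag, value, currency):
    rows.append({"account": account, "tag": tag, "value": value, "currency": currency})

def on_account_summary_end():
    return pd.DataFrame(rows)

# Simulated callback invocations standing in for what EWrapper would deliver:
on_account_summary("DU123456", "NetLiquidation", "100000", "USD")
on_account_summary("DU123456", "TotalCashValue", "25000", "USD")
df = on_account_summary_end()
print(df)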
For more information, you might like Algorithmic Trading with Interactive Brokers

Quart infinite/indefinite streaming response

I am trying to create a server (loosely) based on an old blog post to stream video with Quart.
To stream video to a client, it seems all I should need to do is have a route that returns a generator of frames. However, actually doing this results in a constantly repeated socket.send() raised exception message and shows a broken image on the client. After that, the server does not appear to respond to further requests.
Using more inspiration from the original post, I tried returning a Response (using return Response(generator, mimetype="multipart/x-mixed-replace; boundary=frame")). This does actually display video on the client, but as soon as they disconnect (close the tab, navigate to another page, etc.) the server begins spamming socket.send() raised exception again and does not respond to further requests.
My code is below.
# in app.py
from camera_opencv import Camera
import os
from quart import (
    Quart,
    render_template,
    Response,
    send_from_directory,
)

app = Quart(__name__)

async def gen(c: Camera):
    for frame in c.frames():
        # d_frame = cv_processing.draw_debugs_jpegs(c.get_frame()[1])
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + frame[0] + b"\r\n")

c_gen = gen(Camera(0))

@app.route("/video_feed")
async def feed():
    """Streaming route (img src)"""
    # return c_gen
    return Response(c_gen, mimetype="multipart/x-mixed-replace; boundary=frame")

# in camera_opencv.py
from asyncio import Event

import cv2

class Camera:
    last_frame = []

    def __init__(self, source: int):
        self.video_source = source
        self.cv2_cam = cv2.VideoCapture(self.video_source)
        self.event = Event()

    def set_video_source(self, source):
        self.video_source = source
        self.cv2_cam = cv2.VideoCapture(self.video_source)

    async def get_frame(self):
        await self.event.wait()
        self.event.clear()
        return Camera.last_frame

    def frames(self):
        if not self.cv2_cam.isOpened():
            raise RuntimeError("Could not start camera.")
        while True:
            # read current frame
            _, img = self.cv2_cam.read()
            # encode as a jpeg image and return it
            Camera.last_frame = [cv2.imencode(".jpg", img)[1].tobytes(), img]
            self.event.set()
            yield Camera.last_frame
        self.cv2_cam.release()
This was originally an issue with Quart itself.
After a round of bug fixes to both Quart and Hypercorn, the code as posted functions as intended (as of 2018-11-13).
