Is there any way to put a timeout in the pandas read_sql function? - python-3.x

I connect to a DB2 server through an ODBC connection in my Python code. The DB2 server gets rebooted for maintenance, or disconnects me while running specific server-side tasks; this happens once or twice a day. If my code has started executing the pandas read_sql function to fetch the result of a query at that time, it goes into an infinite wait, even when the server is back up after, let's say, 1 hour.
I want to put a timeout on the execution of read_sql, and whenever that timeout occurs I want to refresh the connection to the DB2 server so that a fresh connection is made before continuing the query.
I have tried making a while loop and picking chunks of data from DB2 instead of pulling the whole result at once, but the problem is that if DB2 disconnects while pulling a chunk, the Python code still goes into an infinite wait.
chunk_size = 1000
offset = 0
while True:
    sql = "SELECT * FROM table_name limit %d offset %d" % (chunk_size, offset)
    df = pd.read_sql(sql, conn)
    df.index += (offset + 1)
    offset += chunk_size
    sys.stdout.write('.')
    sys.stdout.flush()
    if df.shape[0] < chunk_size:
        break
I need read_sql to throw some exception or return a value if the SQL execution takes more than 3 minutes. If that happens, I need the connection to DB2 to be refreshed.

You could use the package func-timeout. You can install it via pip as below:
pip install func-timeout
So, for example, if you have a function doit('arg1', 'arg2') that you want to limit to running for 5 seconds, with func_timeout you can call it like this:
from func_timeout import func_timeout, FunctionTimedOut

try:
    doitReturnValue = func_timeout(5, doit, args=('arg1', 'arg2'))
except FunctionTimedOut:
    print("doit('arg1', 'arg2') could not complete within 5 seconds, hence terminated.\n")
except Exception as e:
    # Handle any exceptions that doit might raise here
    pass
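Applied to the original DB2 scenario, a minimal sketch might look like the following. The make_connection() helper and its DSN/credentials are placeholders for however you actually build the ODBC connection, and note that while func_timeout raises FunctionTimedOut in the caller after the limit, the blocked ODBC call underneath may not be interruptible, so rebuilding the connection afterwards is the safer path.

from func_timeout import func_timeout, FunctionTimedOut
import pandas as pd
import pyodbc

def make_connection():
    # Placeholder DSN/credentials - replace with your real ODBC settings.
    return pyodbc.connect('DSN=DB2_DSN;UID=user;PWD=secret')

def fetch(conn, sql):
    return pd.read_sql(sql, conn)

conn = make_connection()
sql = "SELECT * FROM table_name limit 1000 offset 0"
try:
    # Allow at most 3 minutes (180 seconds) for the query to finish.
    df = func_timeout(180, fetch, args=(conn, sql))
except FunctionTimedOut:
    # Query hung; discard the old handle and make a fresh connection before retrying.
    try:
        conn.close()
    except Exception:
        pass
    conn = make_connection()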

Related

QLDB Python Driver Error Handling with Lambda and SQS

We have a QLDB ingestion process that consists of a Lambda function triggered by SQS.
We want to make sure our pipeline is airtight, so that if a failure or error occurs during driver execution and the data fails to commit to QLDB, we don't lose that data.
In our testing we noticed that if there's a failure within the Lambda itself, it automatically resends the message to the queue, but if the driver fails, the data is lost.
I understand that the default behavior for the driver is to retry four times after the initial failure. My question is, if I wrap qldb_driver.execute_lambda() in a try statement, will that allow the driver to retry upon failure or will it instantly return as a failure and be handled by the except statement?
Here is how I've written the first half of the function:
import json
import boto3
import datetime
from pyqldb.driver.qldb_driver import QldbDriver
from utils import upsert, resend_to_sqs, delete_from_sqs

queue_url = 'https://sqs.XXX/'
sqs = boto3.client('sqs', region_name='us-east-1')
ledger = 'XXXXX'
table = 'XXXXX'
qldb_driver = QldbDriver(ledger_name=ledger, region_name='us-east-1')

def lambda_handler(event, context):
    # Simple iterable to identify messages
    i = 0
    # Error flag
    error = False
    # Empty list to store message send status as well as body or receipt_handle
    batch_messages = []
    for record in event['Records']:
        payload = json.loads(record["body"])
        payload['update_ts'] = str(datetime.datetime.now())
        try:
            qldb_driver.execute_lambda(lambda executor: upsert(executor, ledger=ledger, table_name=table, data=payload))
            # If the message sends successfully, give it status 200 and add the receipt_handle to our list
            # so in case an error occurs later, we can delete this message from the queue.
            message_info = {f'message_{i}': 200, 'receiptHandle': record['receiptHandle']}
            batch_messages.append(message_info)
        except Exception as e:
            print(e)
            # Flip error flag to True
            error = True
            # If the commit fails, set status 400 and add the message's body to our list.
            # This will allow us to send the message back to the queue during error handling.
            message_info = {f'message_{i}': 400, 'body': record['body']}
            batch_messages.append(message_info)
        i += 1
Assuming that this try/except allows the driver to retry upon failure, I've written an additional process to record message data from our batch to delete successful commits and send failures back to the queue:
    # Begin error handling
    if error:
        count = 0
        for j in range(len(batch_messages)):
            # If a message was sent successfully, delete it from the queue
            if batch_messages[j][f'message_{j}'] == 200:
                receipt_handle = batch_messages[j]['receiptHandle']
                delete_from_sqs(sqs, queue_url, receipt_handle)
            # If the message failed to commit to QLDB, send it back to the queue
            else:
                body = batch_messages[j]['body']
                resend_to_sqs(sqs, queue_url, body)
                count += 1
        print(f"ERROR(S) DETECTED - {count} MESSAGES RETURNED TO QUEUE")
    else:
        print("BATCH PROCESSING SUCCESSFUL")
Thank you for your insight!
The QLDB Python driver can be configured for more or fewer retries if you need. I'm not sure whether you wanted it to try only once, or whether you were asking if the driver will try the transaction 4 times before triggering your try/except. The driver will still try up to 4 times before throwing the exception.
You can follow the example here to modify the number of retries. Also, note that the default retry wait is a random millisecond jitter, not an exponential back-off. With QLDB you shouldn't need to wait long periods to retry, since it uses optimistic concurrency control.
Also, with your design of throwing the failed message back into the queue, you might want to consider throwing it into a dead-letter queue instead. Dead-letter queues prevent troublesome messages from retrying indefinitely, unless that's your goal.
(edit/additionally)
Note that the QLDB driver exhausts its retries before raising an exception.
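As a rough sketch of lowering the driver-level retry limit (the RetryConfig import path below follows the pyqldb 3.x driver; verify it against the version you actually run):

from pyqldb.config.retry_config import RetryConfig
from pyqldb.driver.qldb_driver import QldbDriver

# Retry at most twice (the default is 4) before the exception propagates
# to the try/except in lambda_handler.
retry_config = RetryConfig(retry_limit=2)
qldb_driver = QldbDriver(ledger_name='XXXXX',
                         region_name='us-east-1',
                         retry_config=retry_config)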

Connection pool sqlalchemy always returns a cached result

I use the following code to create a connection pool and run a SQL query.
engine = sqlalchemy.create_engine(
    f'mysql+pymysql://{usr}:{pw}@{db_config["host"]}:{db_config["port"]}/'
    f'{sub_db}', pool_size=24, max_overflow=40, poolclass=pool.QueuePool)
connection = engine.connect()
pandas.read_sql(sql=my_query, con=connection)
However, I found that the connection I created sometimes returns a cached DB result:
E.g. I run a SQL function validatePW to validate tokens based on a DB table. I have two users, A and B, with different passwords. User A's password is 'PW1'.
If I run pandas.read_sql(sql="select validatePW('A', 'PW1')", con=connection), I get a True return, which is as expected. And if I first run pandas.read_sql(sql="select validatePW('B', 'PW1')", con=connection), I get a False return (B's password is not 'PW1').
Now if I run these two queries sequentially, the results become funny:
pd.read_sql(sql="select validatePW('A', 'PW1')", con=connection)
pd.read_sql(sql="select validatePW('B', 'PW1')", con=connection)
I get two True returns, although I am expecting the second query to return False. It seems like the connection is returning a cached result from the previous run. Why is that? Is there any way to avoid it? Thanks

multi-threading - How to control print statements from a function during thread calls in Python

I am using ThreadPoolExecutor for parallel execution of a function which prints statements and executes SQL. I would like to manage the print statements from the function. E.g.
def Func(host, sql):
    print('Executing for %s ' % host)
    SQL = Execute(host, sql)  # connecting to DB
    print(SQL)

def main():
    sql = 'show databases;'
    hostList = ['abc.com', 'def.com', 'ghi.com', 'jkl.com']
    with concurrent.futures.ThreadPoolExecutor() as executor:
        future = [executor.submit(Func, host, sql) for host in hostList]
Here, for the 4 items in hostList, it runs the function Func in parallel threads, but prints the results like below:
Executing for abc.com
Executing for def.com
Executing for ghi.com
Executing for jkl.com
then
SQL output 1
SQL output 2
SQL output 3
SQL output 4
What I would like the function to print is like below:
Executing for abc.com
SQL output 1
Executing for def.com
SQL output 2
Executing for ghi.com
SQL output 3
Executing for jkl.com
SQL output 4
If you just want to group your print statements together without reflecting the pause required to execute then you can do the following. Note that if the ONLY thing you are doing is a single print statement then you likely don't need the lock.
import concurrent.futures
import threading
import random
import time

def Func(account, host, sql, lock):
    seconds = random.randint(1, 10)
    time.sleep(seconds)
    result = "Result of executing \"{}\" took {} seconds".format(sql, seconds)
    ## -------------------------------
    ## Probably don't need to lock if you combine these into one statement
    ## -------------------------------
    with lock:
        print('Executing for %s ' % host)
        print("\t{}\n".format(result))
    ## -------------------------------

lock = threading.Lock()
hostList = ['abc.com', 'def.com', 'ghi.com', 'jkl.com']
with concurrent.futures.ThreadPoolExecutor() as executor:
    future = [executor.submit(Func, "acct", host, "somesql", lock) for host in hostList]
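For completeness, a sketch of the lock-free variant mentioned in the comments above, where the two lines are built into one string and emitted with a single print call (a single call usually keeps its output together, though that is not a hard guarantee of the language):

import random
import time

def Func(account, host, sql):
    seconds = random.randint(1, 10)
    time.sleep(seconds)
    result = "Result of executing \"{}\" took {} seconds".format(sql, seconds)
    # One pre-built string, one print call, no lock needed.
    print('Executing for {}\n\t{}\n'.format(host, result))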

Python3 pika channel.basic_consume() causing MySQL too many connections

I have been using pika to make a connection to RabbitMQ and consume messages. Once I start the script on the Ubuntu prod environment, it works as expected, but it keeps opening MySQL connections and never closes them, ending up with too many connections on the MySQL server.
I will appreciate any recommendation on the code below; I cannot understand what is going wrong. Thanking you in advance.
The flow is the following
Starting pika on Python3
Subscribe to a channel and wait for messages
In the callback I do various validation and save or update data inside MySQL
The result showing the problem is a screenshot from Ubuntu htop at the end of the question, which shows new MySQL connections that keep getting added at the top and are never closed
Pika Version = 0.13.0
For MySQL I use pymysql.
Pika Script
def main():
    credentials = pika.PlainCredentials(tunnel['queue']['name'], tunnel['queue']['password'])
    while True:
        try:
            cp = pika.ConnectionParameters(
                host=tunnel['queue']['host'],
                port=tunnel['queue']['port'],
                credentials=credentials,
                ssl=tunnel['queue']['ssl'],
                heartbeat=600,
                blocked_connection_timeout=300
            )
            connection = pika.BlockingConnection(cp)
            channel = connection.channel()

            def callback(ch, method, properties, body):
                if 'messageType' in properties.headers:
                    message_type = properties.headers['messageType']
                    if events.get(message_type):
                        result = Descriptors._reflection.ParseMessage(events[message_type]['decode'], body)
                        if result:
                            result = protobuf_to_dict(result)
                            model.write_response(external_response=result, message_type=message_type)
                    else:
                        app_log.warning('Message type not in allowed list = ' + str(message_type))
                        app_log.warning('continue listening...')

            channel.basic_consume(callback, queue=tunnel['queue']['name'], no_ack=True)
            try:
                channel.start_consuming()
            except KeyboardInterrupt:
                channel.stop_consuming()
                connection.close()
                break
        except pika.connection.exceptions.ConnectionClosed as e:
            app_log.error('ConnectionClosed :: %s' % str(e))
            continue
        except pika.connection.exceptions.AMQPChannelError as e:
            app_log.error('AMQPChannelError :: %s' % str(e))
            continue
        except Exception as e:
            app_log.error('Connection was closed, retrying... %s' % str(e))
            continue

if __name__ == '__main__':
    main()
Inside the script I have a model that does inserts or updates in the database; code below:
def write_response(self, external_response, message_type):
    table_name = events[message_type]['table_name']
    original_response = external_response[events[message_type]['response']]
    if isinstance(original_response, list):
        external_response = []
        for o in original_response:
            record = self.map_keys(o, message_type, events[message_type].get('values_fix', {}))
            external_response.append(self.validate_fields(record))
    else:
        external_response = self.map_keys(original_response, message_type, events[message_type].get('values_fix', {}))
        external_response = self.validate_fields(external_response)
    if not self.mysql.open:
        self.mysql.ping(reconnect=True)
    with self.mysql.cursor() as cursor:
        if isinstance(original_response, list):
            for e in external_response:
                id_name = events[message_type]['id_name']
                filters = {id_name: e[id_name]}
                self.event(
                    cursor=cursor,
                    table_name=table_name,
                    filters=filters,
                    external_response=e,
                    message_type=message_type,
                    event_id=e[id_name],
                    original_response=e  # not required here
                )
        else:
            id_name = events[message_type]['id_name']
            filters = {id_name: external_response[id_name]}
            self.event(
                cursor=cursor,
                table_name=table_name,
                filters=filters,
                external_response=external_response,
                message_type=message_type,
                event_id=external_response[id_name],
                original_response=original_response
            )
        cursor.close()
    self.mysql.close()
    return
On Ubuntu I use systemd to run the script and restart it in case something goes wrong; below is the systemd unit file:
[Unit]
Description=Pika Script
Requires=stunnel4.service
Requires=mysql.service
Requires=mongod.service
[Service]
User=user
Group=group
WorkingDirectory=/home/pika_script
ExecStart=/home/user/venv/bin/python pika_script.py
Restart=always
[Install]
WantedBy=multi-user.target
Image from Ubuntu htop, showing how the MySQL connections keep adding up in the list and are never closed:
Error
tornado_mysql.err.OperationalError: (1040, 'Too many connections')
I have found the issue; posting it in case it helps somebody else.
The problem was that mysqld went into an infinite loop trying to create indexes for a specific database. It never succeeded and kept trying again and again. After finding which database it was trying to index, the solution was to remove that database and recreate it; the mysqld process went back to normal and the infinite index-creation loop disappeared as well.
I would say increasing the connection limit may solve your problem temporarily.
1st, find out why the application is not closing the connection after completing its task.
2nd, check for any slow queries/calls on the DB and fix them if there are any.
3rd, if there are no slow queries/calls on the DB and the application closes the connection/thread immediately after completing its task, then consider playing with "wait_timeout" on the MySQL side.
According to this answer, if you have MySQL 5.7 or 5.8:
It is worth knowing that if you run out of usable disc space on your server partition or drive, this will also cause MySQL to return this error. If you're sure it's not the actual number of users connected, then the next step is to check that you have free space on your MySQL server drive/partition.
From the same thread: you can inspect and increase the number of MySQL connections.
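As a quick sketch of inspecting those server settings from Python with pymysql (the credentials below are placeholders, not values from the question):

import pymysql

# Placeholder credentials - point this at your own server.
conn = pymysql.connect(host='127.0.0.1', user='root', password='secret')
with conn.cursor() as cur:
    cur.execute("SHOW VARIABLES LIKE 'max_connections'")
    print(cur.fetchone())   # e.g. ('max_connections', '151')
    cur.execute("SHOW STATUS LIKE 'Threads_connected'")
    print(cur.fetchone())   # how many clients are connected right now
    cur.execute("SHOW VARIABLES LIKE 'wait_timeout'")
    print(cur.fetchone())   # seconds an idle connection may sit before the server drops it
conn.close()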

Mysql.connector Python, no result when connection is already used

I have a simple piece of code which sends a message over HTTP to another webapp and checks that the message is correctly inserted in the database (2 times).
So it is not this code which inserts into the database (that is done in the other app).
SELECT_TABLE1_BY_ID_AND_DATE = "SELECT * FROM table1 WHERE table1.id = %s AND timedata = FROM_UNIXTIME(%s)"
SELECT_TABLE2_BY_ID_AND_DATE = "SELECT * FROM table2 WHERE table2.id = %s AND timedata = FROM_UNIXTIME(%s)"
try:
    conn = mysql.connector.connect(user=db['user'], password=db['password'], host=db['host'], port=db['port'], database="TEST", raise_on_warnings=True)
    cursor = conn.cursor()
    self.send1Message(msg1)  # Send to HTTP Webapp
    cursor.execute(SELECT_TABLE1_BY_ID_AND_DATE, (idD, timing))
    print(cursor.fetchall())  #1
    self.send2Message(msg2)  # Send to HTTP Webapp
    cursor.execute(SELECT_TABLE2_BY_ID_AND_DATE, (idD, timing))
    print(cursor.fetchall())  #2
except mysql.connector.Error as err:
    print("Something went wrong: {}".format(err))
If I use the same SQL connection between the two sends, only the first fetchall returns data (#1 prints data and #2 prints an empty list).
I also tried closing the connection after #1 and opening another connection. Then it works for both (#1 prints data and #2 too).
(I should point out that my queries are correct and that the data is indeed inserted into the database in time by the webapp.)
Is this normal behaviour for a connection?
Thanks a lot!
Try 2 connections...
conn1 = mysql.connector.connect(user=db['user'], password=db['password'], host=db['host'], port=db['port'], database="TEST", raise_on_warnings=True)
conn2 = mysql.connector.connect(user=db['user'], password=db['password'], host=db['host'], port=db['port'], database="TEST", raise_on_warnings=True)

cursor1 = conn1.cursor()
self.send1Message(msg1)  # Send to HTTP Webapp
cursor1.execute(SELECT_TABLE1_BY_ID_AND_DATE, (idD, timing))
print(cursor1.fetchall())  #1

self.send2Message(msg2)  # Send to HTTP Webapp
cursor2 = conn2.cursor()
cursor2.execute(SELECT_TABLE2_BY_ID_AND_DATE, (idD, timing))
print(cursor2.fetchall())  #2
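If keeping a single connection is preferable, another possibility (an unverified sketch, assuming the stale reads come from the read snapshot held by the still-open transaction on that connection) is to commit between the two queries so the second SELECT sees rows inserted after the first:

conn = mysql.connector.connect(user=db['user'], password=db['password'], host=db['host'], port=db['port'], database="TEST", raise_on_warnings=True)
cursor = conn.cursor()
self.send1Message(msg1)  # Send to HTTP Webapp
cursor.execute(SELECT_TABLE1_BY_ID_AND_DATE, (idD, timing))
print(cursor.fetchall())  #1
conn.commit()  # end the current read snapshot so the next SELECT sees fresh rows
self.send2Message(msg2)  # Send to HTTP Webapp
cursor.execute(SELECT_TABLE2_BY_ID_AND_DATE, (idD, timing))
print(cursor.fetchall())  #2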
