pyodbc.ProgrammingError: Attempt to use a closed cursor - python-3.x

I have this method on my class:
def exec_query(self, dbms, connection_string, sql):
    self.dbms = dbms
    self.connection_string = connection_string
    self.sql = sql
    self._connection, self._cursor = self._mydb(self.dbms, self.connection_string)
    self._result = None
    self.query_result = []
    try:
        self._cursor.execute(self.sql)
        self.collected_data = self._cursor
    except Exception as e:
        raise e
    self._cursor.close()
    self._connection.close()
    return self.collected_data
Then I tried to use its return value outside the class, and I got this error:
pyodbc.ProgrammingError: Attempt to use a closed cursor.
Can't I assign the cursor to a variable? And if not, why?
What I want to do is process the cursor outside the class. Basically, I could just do .fetchall(), get the data, and then close the cursor. But .fetchall() loads the entire result set into memory. So I want to process the cursor outside.

self.collected_data is just another name for the exact same object as self._cursor. If you close self._cursor then self.collected_data is also closed.
You need to either use
self.collected_data = self._cursor.fetchall()
(or whatever) to save the actual data, or leave the connection and cursor open, process your data outside the class, and then call a close method.
You may want to look at making your class usable as a context manager, then use it something like this:
with db_connection.exec_query(dbms, connection_string, sql) as data:
    # process data: pyodbc cursors are iterable, yielding one row at a time
    for row in data:
        do_something_with_row(row)
# on exit, cursor is closed
Iterating over the cursor fetches one row at a time, so the full result set never has to sit in memory; hopefully you get the idea. Your exec_query would look something like this:
class exec_query:
    def __init__(self, dbms, connection_string, sql):
        self.dbms = dbms
        self.connection_string = connection_string
        self.sql = sql
        self._connection, self._cursor = self._mydb(self.dbms, self.connection_string)
        self._result = None
        self.query_result = []
        self._cursor.execute(self.sql)

    def __enter__(self):
        return self._cursor

    def __exit__(self, *args):
        self._cursor.close()
        self._connection.close()
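As a usage note, pyodbc also supports fetchmany() for batched reads, which keeps memory bounded while reducing per-row round trips. A minimal sketch using the context manager above (the batch size of 1000 and do_something_with_row are illustrative):

with exec_query(dbms, connection_string, sql) as cursor:
    while True:
        batch = cursor.fetchmany(1000)  # fetch up to 1000 rows per round trip
        if not batch:
            break
        for row in batch:
            do_something_with_row(row)
# __exit__ has closed the cursor and connection here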

Related

Dart: Store heavy object in an isolate and access its method from main isolate without re-instantiating it

Is it possible in Dart to instantiate a class in an isolate, and then send messages to this isolate to receive return values from its methods (instead of spawning a new isolate and re-instantiating the same class every time)? I have a class with a long initialization and heavy methods. I want to initialize it once and then access its methods without compromising the performance of the main isolate.
Edit: I mistakenly answered this question thinking Python rather than Dart. Snakes on the brain / snakes on a plane.
I am not familiar with Dart programming, but it would seem the concurrency model has a lot of similarities (isolated memory, message passing, etc.). I was able to find an example of two-way message passing with a Dart isolate. There's a little difference in how it gets set up, and the streams are a bit simpler than Python Queues, but in general the idea is the same.
Basically:
Create a port to receive data from the isolate
Create the isolate passing it the port it will send data back on
Within the isolate, create the port it will listen on, and send the other end of it back to main (so main can send messages)
Determine and implement a simple messaging protocol for remote method call on an object contained within the isolate.
This is basically duplicating what a multiprocessing.Manager class does; however, it can be helpful to have a simplified example of how it can work:
from multiprocessing import Process, Lock, Queue
from time import sleep

class HeavyObject:
    def __init__(self, x):
        self._x = x
        sleep(5)  # heavy init

    def heavy_method(self, y):
        sleep(.2)  # medium weight method
        return self._x + y

def HO_server(in_q, out_q):
    ho = HeavyObject(5)
    # msg format for remote method call: ("method_name", (arg1, arg2, ...), {"kwarg1": 1, "kwarg2": 2, ...})
    # pass None to exit worker cleanly
    for msg in iter(in_q.get, None):  # get a remote call message from the queue
        out_q.put(getattr(ho, msg[0])(*msg[1], **msg[2]))  # call the method with the args, and put the result back on the queue

class RMC_helper:  # remote method caller for convenience
    def __init__(self, in_queue, out_queue, lock):
        self.in_q = in_queue
        self.out_q = out_queue
        self.l = lock
        self.method = None

    def __call__(self, *args, **kwargs):
        if self.method is None:
            raise Exception("no method to call")
        with self.l:  # isolate access to queue so results don't pile up and get popped off in possibly wrong order
            print("put to queue: ", (self.method, args, kwargs))
            self.in_q.put((self.method, args, kwargs))
            return self.out_q.get()

    def __getattr__(self, name):
        if not name.startswith("__"):
            self.method = name
            return self
        else:
            raise AttributeError(name)  # let dunder lookups (e.g. pickling) fail normally

def child_worker(remote):
    print("child", remote.heavy_method(5))  # prints 10
    sleep(3)  # child works on something else
    print("child", remote.heavy_method(2))  # prints 7

if __name__ == "__main__":
    in_queue = Queue()
    out_queue = Queue()
    lock = Lock()  # lock is used so as to not confuse which reply goes to which request
    remote = RMC_helper(in_queue, out_queue, lock)
    Server = Process(target=HO_server, args=(in_queue, out_queue))
    Server.start()
    Worker = Process(target=child_worker, args=(remote, ))
    Worker.start()
    print("main", remote.heavy_method(3))  # this will *probably* start first due to startup time of child
    Worker.join()
    with lock:
        in_queue.put(None)
    Server.join()
    print("done")

Trying to figure out how to pass variables from one class to another in python while calling a class from a dictionary

So I am getting used to working with OOP in Python; it has been a bumpy road, but so far things seem to be working. I have, however, hit a snag and I cannot seem to figure this out. Here is the premise.
I call a class and pass two variables to it, a report and a location. From there, I need to take the location variable, pass it to a database, and get a list of filters it is supposed to run through; this is done through a dictionary call. Finally, once that dictionary call happens, I need to take that report and run it through the filters. Here is the code I have.
class Filters(object):
    def __init__(self, report, location):
        self.report = report
        self.location = location

    def get_location(self):
        return self.location

    def run(self):
        cursor = con.cursor()  # con and filterqry are defined elsewhere
        filters = cursor.execute(filterqry).fetchall()
        for i in filters:
            f = ReportFilters.fd.get(i[0])
            f.run()
        cursor.close()

class Filter1(Filters):
    def __init__(self):
        self.f1 = None
        ''' here is where I tried super() and Filters.__init__() etc.... but couldn't make it work '''

    def run(self):
        '''Here is where I want to run the filters, but as of now I am trying to print out the
        location and the report to see if it gets the variables.'''
        print(Filters.get_location())

class ReportFilters(Filters):
    fd = {
        'filter_1': Filter1(),
        'filter_2': Filter2(),
        'filter_3': Filter3()
    }
My errors come from the dictionary call: instantiating the filter classes there requires the report and location variables.
Hope this is clear enough for you to help out with; as always, it is duly appreciated.
DamnGroundHog
The call to the parent class should be made inside the __init__ function: pass the arguments 'self', 'report' and 'location' into __init__() and into the Filters.__init__() call to the parent class so that it can find those variables.
If the error is in the Filter1 class object when you try to use the run method, and you do not see a location or a report variable passed in from the parent class, that is because you haven't supplied them when you instantiated those objects in ReportFilters.fd.
It should be:
# Filter1 must be defined before ReportFilters references it in fd
class Filter1(Filters):
    def __init__(self, report, location):
        Filters.__init__(self, report, location)
        self.f1 = None

    def run(self):
        print(self.get_location())

class ReportFilters(Filters):
    fd = {
        'filter_1': Filter1(report1, location1),  # report1 etc. are placeholders
        'filter_2': Filter2(report2, location2),
        'filter_3': Filter3(report3, location3)
    }
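In Python 3 the same parent call is more idiomatically written with super(); here's a minimal self-contained sketch (the report and location values are made-up placeholders):

class Filters:
    def __init__(self, report, location):
        self.report = report
        self.location = location

    def get_location(self):
        return self.location

class Filter1(Filters):
    def __init__(self, report, location):
        super().__init__(report, location)  # run the parent's __init__ as well
        self.f1 = None

    def run(self):
        print(self.get_location())

Filter1("monthly_report", "London").run()  # prints: London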

using SQLite 3 with python

I am trying to implement a database in my python 3 program. I am using SQLite 3. I don't really understand how to use my DBHelper class.
In order to use my DBHelper, I would need to instantiate a DBHelper object and call a function (insert, etc.). However, each time I instantiate an object, a new connection is made to my database.
I am confused because it looks like I am connecting to the database multiple times, when I feel like I should only be connecting once at the start of the program. But if I don't instantiate a DBHelper object, I cannot use the functions that I need.
Having multiple connections like this also sometimes locks my database.
What is the correct way to implement SQLite in my program?
Edit: I need to use the same sql db file across multiple other classes
import sqlite3

class DBHelper:
    def __init__(self, dbname="db.sqlite"):
        self.dbname = dbname
        try:
            self.conn = sqlite3.connect(dbname)
        except sqlite3.Error as e:
            log().critical('local database initialisation error: "%s"', e)

    def setup(self):
        stmt = "CREATE TABLE IF NOT EXISTS users (id integer PRIMARY KEY)"
        self.conn.execute(stmt)
        self.conn.commit()

    def add_item(self, item):
        stmt = "INSERT INTO users (id) VALUES (?)"
        args = (item,)
        try:
            self.conn.execute(stmt, args)
            self.conn.commit()
        except sqlite3.IntegrityError as e:
            log().critical('user id ' + str(item) + ' already exists in database')

    def delete_item(self, item):
        stmt = "DELETE FROM users WHERE id = (?)"
        args = (item,)
        self.conn.execute(stmt, args)
        self.conn.commit()

    def get_items(self):
        stmt = "SELECT id FROM users"
        return [x[0] for x in self.conn.execute(stmt)]
You can use the singleton design pattern in your code: the connection is created once, and every later instantiation returns that same connection (see the sketch below).
Remember, if you are accessing the connection from concurrent workflows, you have to implement safe access to the database connection inside DBHelper; read the SQLite documentation for more information.
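A minimal sketch of that pattern, assuming the same DBHelper interface as above (the __new__ override shown here is one common way to build a singleton in Python):

import sqlite3

class DBHelper:
    _instance = None

    def __new__(cls, dbname="db.sqlite"):
        # The first call creates the instance and the connection;
        # every later call returns the same object.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.conn = sqlite3.connect(dbname)
        return cls._instance

# Any class in the program can now call DBHelper() and share one connection:
assert DBHelper() is DBHelper()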

flask-sqlalchemy - how to obtain a request-independent db session

I am looking for the best (and correct) way to obtain a request-independent db session.
The problem is the following: I am building a web application that has to access the database. The exposed endpoint accepts a request, does some initial work, then creates a thread (that will perform the hard work), starts it, and replies to the client with a unique id for the "job". Meanwhile the thread goes on with its work (and it has to access the database) and the client can poll to check the status. I am not using a dedicated framework to perform this background job, only a simple thread. I can only have a single background thread going at any time; for this reason I am maintaining the state in a singleton.
The application is created with the application factory design https://flask.palletsprojects.com/en/1.1.x/patterns/appfactories/
I am using Gunicorn as WSGI server and sqlite as database.
The basic structure of the code is the following (I am removing the business logic and imports, but the concept remains):
api_jobs.py
@bp.route('/jobs', methods=['POST'])
def create_job():
    data = request.get_json(force=True) or {}
    name = data['name']
    job_controller = JobController()  # This is a singleton
    job_process = job_controller.start_job(name)
    job_process_dict = job_process.to_dict()
    return jsonify(job_process_dict)
controller.py
class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]

class JobController(object, metaclass=Singleton):  # Python 3 metaclass syntax (the __metaclass__ attribute is Python 2 only)
    def __init__(self):
        self.job_thread = None

    def start_job(self, name):
        if self.job_thread is not None:
            job_id = self.job_thread.job_id
            job_process = JobProcess.query.get(job_id)
            if job_process.status != 'end':
                raise ValueError('A job process is already ongoing!')
            else:
                self.job_thread = None
        job_process = JobProcess(name=name)
        db.session.add(job_process)
        db.session.commit()  # At this step I create the ID
        self.job_thread = JobThread(db.session, job_process.id)
        self.job_thread.start()
        return job_process

class JobThread(threading.Thread):
    def __init__(self, db_session, job_id):
        super().__init__()  # initialise the Thread machinery
        self.job_id = job_id
        self.db_session = db_session
        self.session = self.db_session()

    def run(self):
        self.job_process = self.session.query(JobProcess).get(self.job_id)
        self.job_process.status = 'working'
        self.session.commit()
        i = 0
        while True:
            sleep(1)
            print('working hard')
            i = i + 1
            if i > 10:
                break
        self.job_process.status = 'end'
        self.session.commit()
        self.db_session.remove()
models.py
class JobProcess(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    status = db.Column(db.String(64))
    name = db.Column(db.String(64))

    def to_dict(self):
        data = {
            'id': self.id,
            'status': self.status,
            'name': self.name,
        }
        return data
From my understanding, calling self.session = self.db_session() is actually doing nothing (due to the fact that SQLAlchemy uses a registry, which is also a proxy, if I am not wrong); however, that was the best attempt I found to create a "new/detached/useful" session.
I checked out https://docs.sqlalchemy.org/en/13/orm/contextual.html#using-thread-local-scope-with-web-applications in order to obtain a request-independent db-session, however even using the suggested method of creating a new session factory (sessionmaker + scoped_session), does not work.
The errors that I obtain, with slight changes to the code, are multiple, in this configuration the error is
DetachedInstanceError: Instance <JobProcess at 0x7f875f81c350> is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: http://sqlalche.me/e/bhk3)
The basic question remains: Is it possible to create a session that will live inside the thread and that I will take care of creating/tearing down?
The reason you are encountering the DetachedInstanceError is that you are attempting to pass the session from your main thread to your job thread. SQLAlchemy's scoped sessions use thread-local storage to manage sessions, and thus a single session cannot be shared between two threads. You just need to create a new session in the run method of your job thread, as sketched below.
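A minimal sketch of that fix; passing db.engine to the thread instead of db.session is my assumption (engines, unlike sessions, are safe to share across threads):

import threading
from sqlalchemy.orm import sessionmaker

class JobThread(threading.Thread):
    def __init__(self, engine, job_id):
        super().__init__()
        self.engine = engine  # e.g. db.engine from the Flask app
        self.job_id = job_id

    def run(self):
        # Build a session that lives entirely inside this thread.
        Session = sessionmaker(bind=self.engine)
        session = Session()
        try:
            job_process = session.query(JobProcess).get(self.job_id)
            job_process.status = 'working'
            session.commit()
            # ... do the heavy work here ...
            job_process.status = 'end'
            session.commit()
        finally:
            session.close()  # tear the session down in the same thread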

Python neo4j bolt driver loses connection after context switch

I have a backend written in Django which uses the neo4j bolt driver to communicate with the neo4j graph db.
I use a singleton to handle the connection, and the bolt driver closes the connection whenever I access it from a location other than where the connection was initially established (e.g. I open the connection in a view, access it in a signal, and when I try to save in the view the connection is lost).
I've tried to extract the main problem and break it down into the small piece of example code below.
I would appreciate any explanation of this behavior, or even better a solution ;)
from neo4j.v1 import Driver, GraphDatabase, basic_auth, Session, Transaction

def main():
    gm = GraphMapper()
    gm.begin_atomic_transaction()
    print(f"graph connection closed before method? {gm.is_connection_closed()}")  # -> false
    fill_transaction()  # the context switch
    print(f"graph connection closed after method? {gm.is_connection_closed()}")  # -> true
    if not gm.is_connection_closed():
        print("graph connection open - try to commit")  # -> is never called
        gm.commit_atomic_transaction_and_close_session()

def fill_transaction():
    gm = GraphMapper()
    print(f"graph connection closed in method? {gm.is_connection_closed()}")  # -> true
    gm.create_update_node("TestNode")

class GraphMapper:
    __instance = None
    __transaction = None  # type: Transaction
    __session = None  # type: Session
    __connection = None  # type: Connection
    __driver = None  # type: Driver

    def __new__(cls, *args, **kwargs):
        if not isinstance(cls.__instance, cls):
            cls.__instance = object.__new__(cls, *args, **kwargs)
        return cls.__instance

    def __init__(self):
        # note: __init__ still runs on every GraphMapper() call, so this
        # re-creates the driver each time the singleton is fetched
        self.__driver = GraphDatabase.driver("bolt://localhost:7687", auth=basic_auth("neo4j", "password"))

    def is_connection_closed(self):
        return self.__transaction.session._connection._closed

    def begin_atomic_transaction(self):
        self.__session = self.__driver.session()
        self.__transaction = self.__session.begin_transaction()
        self.__connection = self.__transaction.session._connection
        return self.__transaction

    def commit_atomic_transaction_and_close_session(self):
        result = self.__transaction.commit()
        self.__transaction = None
        return result

    def create_update_node(self, label):
        # Add Cypher statement to transaction
        pass
Implementation details: I have a wrapper object "GraphMapper" which encapsulates the connection, session, and transaction of the driver, and it is designed as a singleton. A transaction is established at one point (A, e.g. a view), but I cannot complete the transaction there. I need to add additional values from another location (B, e.g. a post-save signal). However, I cannot pass a reference to the "GraphMapper" from A to B. Thus, I came up with the singleton implementation explained above.
I have ensured that the singleton is exactly the same instance in all locations (within one request). But the moment I exit the context (package, class, or method) through a method call and retrieve the "GraphMapper" instance at the very next location, the connection is closed. I even checked the reference count to the "GraphMapper" and its connection; the garbage collector should not delete it. Occasionally it reports that the connection is not closed, but writing to the graph then results in a connection-refused error.
P.S.: I know there is some useless and unnecessary code; this is for illustrative purposes only, and I wanted to make sure that the garbage collector did not kill some objects.
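One detail worth illustrating (an observation about the snippet above, not a confirmed diagnosis of the lost connection): with a __new__-based singleton, Python still runs __init__ on every instantiation, so every GraphMapper() call re-creates self.__driver. A minimal demonstration of that behavior:

class Sticky:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = object.__new__(cls)
        return cls._instance

    def __init__(self):
        self.payload = object()  # re-assigned on EVERY instantiation

a = Sticky()
first_payload = a.payload
b = Sticky()
print(a is b)                      # True  - same singleton instance
print(b.payload is first_payload)  # False - __init__ ran again and replaced it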
