Use a function from one script in another - python-3.x

I have a function that connects to a predefined database. The code looks like this:
file1.py
import pymysql
import settings  # assumed local module holding host, user, password, database and port

def conn_db():
    try:
        cursor = pymysql.cursors.DictCursor
        conn = pymysql.connect(host=settings.host,
                               user=settings.user,
                               password=settings.password,
                               db=settings.database,
                               port=settings.port,
                               cursorclass=cursor)
        dbcursor = conn.cursor()
    except pymysql.err.OperationalError as e:
        print("Unable to make a connection to the mysql database.\nError was: {}".format(e))
        raise  # re-raise so conn and dbcursor are never returned unbound
    return conn, dbcursor
How can I use this function conn_db from file1.py in file2.py, and then run file2.py by executing it with the python interpreter? I'm having a hard time with even something this basic, after several attempts.
Thank you.

You can use import file1 and then call file1.conn_db() to use the function.
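For example, file2.py could look like the following minimal sketch. It assumes file1.py (and the settings module it relies on) is importable from the same directory, and that conn_db() raises on failure rather than returning unbound values:

import file1

def main():
    # reuse the connection helper defined in file1
    conn, dbcursor = file1.conn_db()
    dbcursor.execute("SELECT 1")
    print(dbcursor.fetchone())
    conn.close()

if __name__ == "__main__":
    main()

Running python file2.py from a shell then executes main(), which calls the imported function.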

Related

How to create a PostgreSQL function in SQLAlchemy DDL

Hi everyone!
I'm wondering how I can create a function in PostgreSQL every time I create or replace a table in my database. I can't find an example that works for my case, so I tried passing a string with the CREATE command, like this:
from sqlalchemy import create_engine

engine = create_engine('/path/to/db...')
conn = engine.connect()
func = 'CREATE OR REPLACE FUNCTION my_func() RETURN SETOF....'
conn.execute(func)
I got a syntax error running the above code, just before the "RETURN SETOF...". So I tried something like this instead:
from sqlalchemy import create_engine, DDL  # DDL import added; it is needed below

engine = create_engine('/path/to/db...')
conn = engine.connect()
func = DDL('CREATE OR REPLACE FUNCTION my_func() '
           'RETURN SETOF....')
func.execute_if(dialect='postgresql')
I know I'm missing something here, but I can't work out what it is.
The syntax error was RETURN rather than RETURNS: PostgreSQL's CREATE FUNCTION syntax requires RETURNS.
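With that fixed, a working version of the first snippet might look like this sketch (the SETOF people return type and the function body are placeholder assumptions; engine.begin() is used so the DDL is committed):

from sqlalchemy import create_engine, text

engine = create_engine('/path/to/db...')
func = text(
    "CREATE OR REPLACE FUNCTION my_func() RETURNS SETOF people AS $$ "
    "SELECT * FROM people; "
    "$$ LANGUAGE sql;"
)
with engine.begin() as conn:  # begin() commits the transaction on exit
    conn.execute(func)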

How to import psycopg2 into a new module I'm writing

So to simplify, I'm trying to write my own module (test.py) that looks as follows:
import psycopg2

def get_data(xyz):
    connection = psycopg2.connect(user="",
                                  password="",
                                  host="",
                                  port="",
                                  database="")
    last_qry = """select * from xyz.abc"""
    cursor = connection.cursor()
    cursor.execute(last_qry)
    last_data = cursor.fetchone()
    cursor.close()
    connection.close()
    return last_data
In a different file I am running:
import test
get_data(xyz)
and I get the following error:
name 'psycopg2' is not defined
What am I doing wrong?
There are several bugs in the code snippets you posted.
Your import should be like this:
    from test import get_data
or this way:
    import test
    test.get_data()
What is the use of xyz? The second code snippet must raise a NameError because xyz is not defined; if you want to use it in last_qry, you have to interpolate it, for example with .format().
Also, what is the structure of the directory containing the second file, and where is the first file?
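Putting those fixes together, a corrected pair of files might look like the sketch below (the empty connection parameters and the xyz.abc table are placeholders carried over from the question):

# test.py
import psycopg2

def get_data():
    connection = psycopg2.connect(user="", password="",
                                  host="", port="", database="")
    cursor = connection.cursor()
    cursor.execute("select * from xyz.abc")
    last_data = cursor.fetchone()
    cursor.close()
    connection.close()
    return last_data

# caller.py
from test import get_data  # import the name, or use import test; test.get_data()

print(get_data())

Note that a module named test.py also shadows Python's standard-library test package, which can cause further confusion.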

Launching parallel tasks: Subprocess output triggers function asynchronously

The example I describe here is purely conceptual, so I'm not interested in solving this exact problem.
What I need to accomplish is to asynchronously run a function based on the continuous output of a subprocess command, in this case the Windows ping yahoo.com -t command. Based on the time value from the replies, I want to trigger the startme function. Inside this function there will be some more processing, including some database and/or network-related calls, so basically I/O work.
My best bet would be that I should use threading, but for some reason I can't get this to work as intended. Here is what I have tried so far.
First of all, I tried the old way of using threads, like this:
import subprocess
import re
import asyncio
import time
import threading

def startme(mytime: int):
    print(f"Mytime {mytime} was started!")
    time.sleep(mytime)  # including more long operations here such as database calls, if possible
    print(f"Mytime {mytime} finished!")

myproc = subprocess.Popen(['ping', 'yahoo.com', '-t'],
                          shell=True, stdout=subprocess.PIPE)

def main():
    while True:
        output = myproc.stdout.readline()
        if myproc.poll() is not None:
            break
        myoutput = output.strip().decode(encoding="UTF-8")
        print(myoutput)
        mytime = re.findall(r"(?<=time\=)(.*)(?=ms\s)", myoutput)
        try:
            mytime = int(mytime[0])
            if mytime < 197:
                # startme(int(mytime[0]))
                p1 = threading.Thread(target=startme(mytime), daemon=True)
                # p1 = threading.Thread(target=startme(mytime))  # tried with and without the daemon
                p1.start()
                # p1.join()
        except:
            pass

main()
But right after startme() fires for the first time, the pings stop showing and wait for the time.sleep() inside startme() to finish.
I did manage to get this working using concurrent.futures' ThreadPoolExecutor, but when I tried to replace the time.sleep() with the actual database query, I found that my startme() function never completes: no Mytime xxx finished! message is ever shown, and no database entry is made.
import sqlite3
import subprocess
import re
import asyncio
import time
# import threading
# import multiprocessing
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import ProcessPoolExecutor

conn = sqlite3.connect('test.db')
c = conn.cursor()
c.execute(
    '''CREATE TABLE IF NOT EXISTS mytable (id INTEGER PRIMARY KEY, u1, u2, u3, u4)''')

def startme(mytime: int):
    print(f"Mytime {mytime} was started!")
    c.execute("INSERT INTO mytable VALUES (null, ?, ?, ?, ?)", (1, 2, 3, mytime))
    conn.commit()
    print(f"Mytime {mytime} finished!")

myproc = subprocess.Popen(['ping', 'yahoo.com', '-t'],
                          shell=True, stdout=subprocess.PIPE)

def main():
    while True:
        output = myproc.stdout.readline()
        myoutput = output.strip().decode(encoding="UTF-8")
        print(myoutput)
        mytime = re.findall(r"(?<=time\=)(.*)(?=ms\s)", myoutput)
        try:
            mytime = int(mytime[0])
            if mytime < 197:
                print(f"The time {mytime} is low enough to call startme()")
                executor = ThreadPoolExecutor()
                # executor = ProcessPoolExecutor()  # tried processes too, even though this is not CPU-bound
                executor.submit(startme, mytime)
        except:
            pass

main()
I did try asyncio, but I soon realized it isn't the right fit here, though I'm wondering if I should try aiosqlite.
I also thought about using asyncio.create_subprocess_shell and running both as parallel subprocesses, but I can't think of a way to wait for a certain string from the ping command that would trigger the second script.
Please note that I don't really need a return value from the startme() function, and the ping command example is conceptually derived from mitmproxy's mitmdump output command.
The first code wasn't working because I made a silly mistake when creating the thread: p1 = threading.Thread(target=startme(mytime)) calls the function immediately instead of passing the function and its arguments separately, like this: p1 = threading.Thread(target=startme, args=(mytime,)).
The reason I could not get the SQL INSERT statement to work in my second code was this error:
SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 10688 and this is thread id 17964
which I didn't see until I wrapped my SQL statement in a try/except and printed the error. So I needed to make the SQLite database connection inside my startme() function.
The other asyncio ideas were a dead end and don't apply to the issue here.
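Combining both fixes, the worker might look like the following sketch (it keeps the table layout from the question; catching sqlite3.Error instead of a bare except is my own choice):

import sqlite3
import threading

def startme(mytime: int):
    print(f"Mytime {mytime} was started!")
    # open the connection inside the worker thread, so the SQLite
    # objects are created and used in the same thread
    conn = sqlite3.connect('test.db')
    try:
        conn.execute("INSERT INTO mytable VALUES (null, ?, ?, ?, ?)",
                     (1, 2, 3, mytime))
        conn.commit()
    except sqlite3.Error as e:
        print(f"insert failed: {e}")
    finally:
        conn.close()
    print(f"Mytime {mytime} finished!")

# pass the callable and its arguments separately, so the new thread,
# not the current one, invokes startme
p1 = threading.Thread(target=startme, args=(5,), daemon=True)
p1.start()
p1.join()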

Python 3 Try/Except best practice?

This is a question about code structure rather than syntax. I'd like to know what best practice is, and why.
Imagine you've got a Python programme with a main module like so:
import my_db_manager as dbm

def main_loop():
    people = dbm.read_db()
    # TODO: write the rest of the programme, doing something with the data from people...

if __name__ == '__main__':
    main_loop()
Then you've got several separate .py files for managing interactions with various tables in a database. One of these .py files, my_db_manager, looks like this:
def read_db():
    people = []
    db = connection_manager.get_connection()
    cursor = db.cursor()
    try:
        # database reading statement
        sql = 'SELECT DISTINCT(name) FROM people'
        cursor.execute(sql)
        results = cursor.fetchall()
        people = [x[0] for x in results]
    except Exception as e:
        print(f'Error: {e}')
    finally:
        return people
In the example above, the function read_db() is called from main_loop() in the main module. read_db() contains try/except clauses to manage errors when interacting with the database. While this works fine, the try/except clauses could instead be placed in main_loop() when calling read_db(), or equally in both places. What is best practice when using try/except: in the db_manager, in the main_loop() where you're managing the programme's logic flow, or in both places? Bear in mind I'm giving a specific example above, but I'm trying to extrapolate a general rule for applying try/except when writing Python.
The best way to write try/except, in Python or anywhere else, is as narrow as possible. It's a common problem to catch more exceptions than you meant to handle! (Credits: This Answer)
In your particular case, it belongs inside the function. That:
a) abstracts database errors away from the main thread;
b) rules database errors out as a suspect when something else causes an exception in the main thread;
c) lets you deal with all database-related errors in one place, efficiently and creatively. How else would you build the people list outside the function, in the main thread? It would double the mess.
Finally, you should stick to this minimalism even inside the function, while still covering every place where an exception could occur:
def read_db():
    # database reading statement
    sql = 'SELECT DISTINCT(name) FROM people'
    try:
        db = connection_manager.get_connection()
        cursor = db.cursor()
        cursor.execute(sql)
        results = cursor.fetchall()
    except Exception as e:
        print(f'Error: {e}')
        return []
    else:
        return [x[0] for x in results]
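To make the except clause as narrow as the advice above suggests, you could also catch only the driver's error class rather than Exception. A sketch, assuming an sqlite3 database file named people.db purely for illustration (most DB-API drivers expose a comparable Error base class):

import sqlite3

def read_db():
    sql = 'SELECT DISTINCT(name) FROM people'
    try:
        db = sqlite3.connect('people.db')  # hypothetical database file
        results = db.execute(sql).fetchall()
    except sqlite3.Error as e:  # only database errors are handled here
        print(f'Error: {e}')
        return []
    return [x[0] for x in results]

A programming mistake such as a mistyped variable name then propagates normally instead of being swallowed by a broad except Exception.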

Using SQLite 3 with Python

I am trying to implement a database in my Python 3 program using SQLite 3, but I don't really understand how to use my DBHelper class.
In order to use my DBHelper, I would need to instantiate a DBHelper object and call a function (insert, etc.). However, each time I instantiate an object, a new connection is made to my database.
I am confused because it looks like I am connecting to the database multiple times, when I feel like I should only be connecting once at the start of the program. But if I don't instantiate a DBHelper object, I cannot use the functions that I need.
Having multiple connections like this also sometimes locks my database.
What is the correct way to implement SQLite in my program?
Edit: I need to use the same SQLite database file across multiple other classes.
import sqlite3

class DBHelper:
    def __init__(self, dbname="db.sqlite"):
        self.dbname = dbname
        try:
            self.conn = sqlite3.connect(dbname)
        except sqlite3.Error as e:
            log().critical('local database initialisation error: "%s"', e)

    def setup(self):
        stmt = "CREATE TABLE IF NOT EXISTS users (id integer PRIMARY KEY)"
        self.conn.execute(stmt)
        self.conn.commit()

    def add_item(self, item):
        stmt = "INSERT INTO users (id) VALUES (?)"
        args = (item,)
        try:
            self.conn.execute(stmt, args)
            self.conn.commit()
        except sqlite3.IntegrityError as e:
            log().critical('user id ' + str(item) + ' already exists in database')

    def delete_item(self, item):
        stmt = "DELETE FROM users WHERE id = (?)"
        args = (item,)
        self.conn.execute(stmt, args)
        self.conn.commit()

    def get_items(self):
        stmt = "SELECT id FROM users"
        return [x[0] for x in self.conn.execute(stmt)]
You can use the singleton design pattern in your code: you instantiate your connection once, and each later instantiation returns that same connection.
Remember, if you are accessing the connection from concurrent workflows, you have to implement safe access to the database connection inside DBHelper. Read the SQLite documentation for more information.
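A minimal sketch of that pattern, reusing the DBHelper name from the question (overriding __new__ is one common way to implement a singleton in Python):

import sqlite3

class DBHelper:
    _instance = None

    def __new__(cls, dbname="db.sqlite"):
        # create the connection only on the first instantiation;
        # every later DBHelper() returns the same object
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.conn = sqlite3.connect(dbname)
        return cls._instance

    def get_items(self):
        return [x[0] for x in self.conn.execute("SELECT id FROM users")]

a = DBHelper()
b = DBHelper()
assert a is b  # same helper object, same underlying connection

Alternatively, create one DBHelper at startup and pass it explicitly to the classes that need it; explicit sharing is often easier to follow than a singleton.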
