Python check if exists in SQLite3 - python-3.x

I'm trying to check whether a value exists in an SQLite3 db. Unfortunately I cannot seem to get it to work. The airports table contains 3 columns, with ICAO as the first column.
if c.execute("SELECT EXISTS(SELECT 1 FROM airports WHERE ICAO='EHAM')") is True:
    print("Found!")
else:
    print("Not found...")
The code runs without any errors, but the result is always the same (not found).
What is wrong with this code?

Try this instead:
c.execute("SELECT EXISTS(SELECT 1 FROM airports WHERE ICAO='EHAM')")
if c.fetchone():
    print("Found!")
else:
    print("Not found...")
The return value of cursor.execute is the cursor itself (to be precise, a reference to it), independent of the query results. You can easily check that:
>>> r = c.execute("SELECT EXISTS(SELECT 1 FROM airports WHERE ICAO='EHAM')")
>>> r is True
False
>>> r is False
False
>>> r is None
False
>>> r is c
True
On the other hand, calling cursor.fetchone returns a result tuple, or None if no row satisfies the query conditions. So in your case if c.fetchone(): would amount to one of the following:
if (1, ):
    ...
or
if None:
    ...
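As an aside, when the ICAO code comes from a variable rather than a literal, it is safer to bind it as a parameter instead of interpolating it into the SQL string. A minimal sketch (table layout follows the question; the column values are placeholders):

```python
import sqlite3

c = sqlite3.connect(":memory:").cursor()
c.execute("CREATE TABLE airports (ICAO TEXT, col2 TEXT, col3 TEXT)")
c.execute("INSERT INTO airports VALUES ('EHAM', 'x', 'y')")

icao = "EHAM"
# The ? placeholder binds the value safely (proper quoting, no SQL injection).
c.execute("SELECT 1 FROM airports WHERE ICAO = ?", (icao,))
if c.fetchone():
    print("Found!")
else:
    print("Not found...")
```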

Let's prepare a database to test it.
import sqlite3
c = sqlite3.connect(":memory:")
c.execute("CREATE TABLE airports (ICAO STRING, col2 STRING, col3 STRING)")
c.execute("INSERT INTO airports (ICAO, col2, col3) VALUES (?, ?, ?)", ('EHAM', 'value2', 'value3'))
Since your SELECT 1 FROM airports WHERE ICAO = 'EHAM' already serves the purpose of checking existence, let's use it directly, without the redundant SELECT EXISTS() wrapper:
if c.execute("SELECT 1 FROM airports WHERE ICAO = 'EHAM'").fetchone():
    print("Found!")
else:
    print("Not found...")
the result is
Found!
Let's check a non-existent case
if c.execute("SELECT 1 FROM airports WHERE ICAO = 'NO-SUCH'").fetchone():
    print("Found!")
else:
    print("Not found...")
the result is
Not found...
If you just want to fix your code, you can try
if c.execute("SELECT EXISTS(SELECT 1 FROM airports WHERE ICAO = 'EHAM')").fetchone() == (1,):
    print("Found!")
else:
    print("Not found...")
the result is
Found!

Thanks for the answer from zero323, although the code snippet is subtly wrong: with SELECT EXISTS(...), fetchone() never returns None. It always returns a one-element tuple, (1,) when the row exists and (0,) when it does not, so you have to look at the contained value. The following code works without problems in Python 3:
response = self.connection.execute("SELECT EXISTS(SELECT 1 FROM invoices WHERE id=?)", (self.id, ))
fetched = response.fetchone()[0]
if fetched == 1:
    print("Exist")
else:
    print("Does not exist")

I don't have the reputation to comment, but the disagreement here comes down to the query, not to Python. In Python, 1 and 0 are truthy and falsy respectively, so testing truthiness rather than comparing == 1 is the idiomatic style.
The top answer's if c.fetchone(): is therefore correct for a bare SELECT 1 ... query, which returns no row at all (None) when there is no match. With SELECT EXISTS(...), however, fetchone() always returns a one-element tuple, and even (0,) is truthy, so there you must inspect fetchone()[0].
Checking for equality with 1 where a plain truthiness test suffices is unnecessary and against Python best practices.
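Both sides of this debate can be verified directly. The sketch below (in-memory database, empty airports table, so no 'EHAM' row exists) shows why the truthiness test only works for the bare SELECT 1 form:

```python
import sqlite3

c = sqlite3.connect(":memory:").cursor()
c.execute("CREATE TABLE airports (ICAO TEXT)")  # empty table: no EHAM row

# SELECT EXISTS always yields exactly one row, (0,) or (1,) ...
row = c.execute("SELECT EXISTS(SELECT 1 FROM airports WHERE ICAO='EHAM')").fetchone()
print(row, bool(row))  # the tuple (0,) is non-empty, hence truthy!

# ... while a bare SELECT 1 yields None when nothing matches,
# so `if c.fetchone():` behaves as intended.
row2 = c.execute("SELECT 1 FROM airports WHERE ICAO='EHAM'").fetchone()
print(row2)  # None
```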

Related

How to parse any SQL get columns names and table name using SQL parser in python3

I am able to get the column names and table name using sqlparse, but only for simple SELECT statements.
Can somebody explain how to get the column names and table name from any complex SQL?
Here is a solution for extracting column names from complex SQL SELECT statements (tested with Python 3.9):
import sqlparse

def get_query_columns(sql):
    stmt = sqlparse.parse(sql)[0]
    columns = []
    column_identifiers = []

    # get column identifiers
    in_select = False
    for token in stmt.tokens:
        if isinstance(token, sqlparse.sql.Comment):
            continue
        if str(token).lower() == 'select':
            in_select = True
        elif in_select and token.ttype is None:
            for identifier in token.get_identifiers():
                column_identifiers.append(identifier)
            break

    # get column names
    for column_identifier in column_identifiers:
        columns.append(column_identifier.get_name())

    return columns

def test():
    sql = '''
    select
        a.a,
        replace(coalesce(a.b, 'x'), 'x', 'y') as jim,
        a.bla as sally  -- some comment
    from
        table_a as a
    where
        c > 20
    '''
    print(get_query_columns(sql))

test()
# outputs: ['a', 'jim', 'sally']
This is how you print the table name with sqlparse:
1) Using a SELECT statement:
>>> import sqlparse
>>> parse = sqlparse.parse('select * from dbo.table')
>>> print([str(t) for t in parse[0].tokens if t.ttype is None][0])
dbo.table
(OR)
2) Using an INSERT statement:
def extract_tables(sql):
    """Extract the table names from an SQL statement.

    Returns a list of (schema, table, alias) tuples.
    """
    parsed = sqlparse.parse(sql)
    if not parsed:
        return []

    # INSERT statements must stop looking for tables at the first sign of
    # punctuation, e.g. in INSERT INTO abc (col1, col2) VALUES (1, 2),
    # abc is the table name, but if we don't stop at the first lparen, then
    # we'll identify abc, col1 and col2 as table names.
    insert_stmt = parsed[0].token_first().value.lower() == "insert"
    # extract_from_part and extract_table_identifiers are helper generators
    # defined elsewhere in the same module.
    stream = extract_from_part(parsed[0], stop_at_punctuation=insert_stmt)
    return list(extract_table_identifiers(stream))
The column names may be tricky because they can be ambiguous or even derived. However, you can get the column names, sequence and type from virtually any query or stored procedure.
The following collects all the column names until the FROM keyword is encountered.
import sqlparse
from sqlparse.sql import Identifier, IdentifierList
from sqlparse.tokens import Keyword

def parse_sql_columns(sql):
    columns = []
    parsed = sqlparse.parse(sql)
    stmt = parsed[0]
    for token in stmt.tokens:
        if isinstance(token, IdentifierList):
            for identifier in token.get_identifiers():
                columns.append(str(identifier))
        if isinstance(token, Identifier):
            columns.append(str(token))
        if token.ttype is Keyword:  # from
            break
    return columns

Postgres is putting NaN in null values

I am using psycopg2 in a Python script.
The script parses JSON files and puts them into a Postgres RDS.
When a value is missing in the JSON file, the script is supposed to skip the specific column
(so it should insert a null value in the table, but instead it puts NaN).
Has anybody encountered this issue?
The part that checks if the column is empty:
if (str(df.loc[0][col]) == "" or df.loc[0][col] is None or str(df.loc[0][col]) == 'None' or str(df.loc[0][col]) == 'NaN' or str(df.loc[0][col]) == 'null'):
    df.drop(col, axis=1, inplace=True)
else:
    cur.execute("call mrr.add_column_to_table('{0}', '{1}');".format(table_name, col))
The insertion part:
def copy_df_to_sql(df, conn, table_name):
    if len(df) > 0:
        df_columns = list(df)
        columns = '","'.join(df_columns)  # create ("col1","col2",...)
        # create VALUES(%s, %s, ...), one %s per column
        values = "VALUES({})".format(",".join(["%s" for _ in df_columns]))
        # create INSERT INTO table ("columns") VALUES(%s, ...)
        emp = '"'
        insert_stmt = 'INSERT INTO mrr.{} ({}{}{}) {}'.format(table_name, emp, columns, emp, values)
        cur = conn.cursor()
        import psycopg2.extras
        psycopg2.extras.execute_batch(cur, insert_stmt, df.values)
        conn.commit()
        cur.close()
Ok, so the reason this is happening is probably that pandas represents missing values as NaN:
when the DataFrame is inserted into the table, the missing values are passed along as pandas' NaN rather than as SQL NULL.
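Pandas' NaN marker is a float, not Python's None, and only None is adapted to SQL NULL by psycopg2. One possible fix (a sketch, not tied to the exact schema above) is to sanitize each row before handing it to execute_batch:

```python
import math

def nan_to_none(row):
    """Replace float NaN values with None so the driver binds SQL NULL."""
    return [None if isinstance(v, float) and math.isnan(v) else v for v in row]

# e.g. pass (nan_to_none(row) for row in df.values) to execute_batch
print(nan_to_none([1.5, float("nan"), "text"]))  # [1.5, None, 'text']
```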

After integer greater than 9 Incorrect number of bindings supplied. The current statement uses 1, and there are 2 supplied

def delete_Link(id):
    connection = sql_Connect()
    cursor = connection.cursor()
    cursor.execute("DELETE FROM table WHERE id =?", str(id))
    connection.commit()
After iterating over rows once the table id is greater than 9 I receive the following error
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 2 supplied.
Change str(id) to (str(id), ), like in this example. execute treats its second argument as a sequence of parameters, and a string is a sequence of characters, so str(10) supplies two bindings ('1' and '0') for a single placeholder:
import sqlite3

def delete_link(id):
    connection = sqlite3.connect('test.db')
    cursor = connection.cursor()
    cursor.execute("DELETE FROM t WHERE id =?", (str(id),))
    connection.commit()
    print('Deleted ' + str(id))

if __name__ == '__main__':
    for id in range(1, 11):
        delete_link(id)
Check out the examples of .execute() at https://docs.python.org/2/library/sqlite3.html. They show how to pass a tuple to that method in a parameterized query.
def delete_Link(id):
    connection = sql_Connect()
    cursor = connection.cursor()
    cursor.execute("DELETE FROM table WHERE id =?", [id])
    connection.commit()
Changed the original snippet to pass [id] (a one-element list) instead of str(id), which fixed the issue. Hope this helps others.
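The underlying cause can be reproduced with a throwaway in-memory database: once id exceeds 9, str(id) is a two-character sequence, so two bindings arrive for one placeholder:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.execute("INSERT INTO t VALUES (10)")

try:
    # str(10) == "10" is a 2-character sequence -> 2 bindings, 1 placeholder
    conn.execute("DELETE FROM t WHERE id = ?", str(10))
except sqlite3.ProgrammingError as e:
    err = str(e)
    print(err)

conn.execute("DELETE FROM t WHERE id = ?", (10,))  # one-element tuple works
```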

Checking if python sqlite table is populated

Here is my code:
dbContent = cursor.execute("SELECT COUNT(*) FROM parse")
if dbContent is None:
    # This should run the nested code if it worked.
Instead it runs the else statement, which is not what should be happening.
I am not a Python expert, but I think your code is just not correct. Please try this instead:
cursor.execute("SELECT COUNT(*) FROM parse")
result = cursor.fetchone()
numRows = result[0]
if numRows == 0:
    # run your code
dbContent = cursor.execute("SELECT 1 FROM parse LIMIT 1")
rows = dbContent.fetchall()
if not rows:
    ...
Lists have implicit booleanness which you can use. (Note: don't use COUNT(*) here, since that query always returns exactly one row even for an empty table.)
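A quick way to see why the original is None check can never succeed: a COUNT(*) query always returns exactly one row, even for an empty table, so the count itself has to be inspected. A minimal sketch with a throwaway table:

```python
import sqlite3

cursor = sqlite3.connect(":memory:").cursor()
cursor.execute("CREATE TABLE parse (x INTEGER)")

# COUNT(*) on an empty table still yields one row: (0,)
row = cursor.execute("SELECT COUNT(*) FROM parse").fetchone()
print(row)  # (0,) -- never None, so an `is None` check is always False

if row[0] == 0:
    print("table is empty")
```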

pymssql - SELECT works but UPDATE doesn't

import pymssql
import decimal

CONN = pymssql.connect(server='1233123123', user='s123', password='sa1231231', database='DBforTEST')
CURSOR = CONN.cursor()

# This part is good code; there is no problem here.
CURSOR.execute("SELECT ttt from test where w=2")
ROW = CURSOR.fetchone()
tmp = list()
tmp.append(ROW)
if ROW is None:
    print("table has nothing")
else:
    while ROW:
        ROW = CURSOR.fetchone()
        tmp.append(ROW)
print(tmp)
# it works!

CURSOR.execute("""
    UPDATE test
    SET w = 16
    where ttt = 1
""")
# it doesn't work
I'm using Python 3.5 with pymssql.
In my code, the SELECT statement works, so I can guarantee the connection is fine.
But the UPDATE statement doesn't take effect when run from Python, while the same statement works in SSMS.
What is the problem?
My guess is that a SELECT statement only reads, so the DB can serve the data, but an UPDATE modifies the DB, so the DB blocks it.
How can I solve this?
CONN.commit()
If autocommit is not enabled, you have to commit yourself.
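The same principle can be demonstrated with the standard library's sqlite3 (pymssql follows the same DB-API 2.0 convention, but isn't assumed to be installed here): an UPDATE made on one connection is invisible to other connections until commit() is called. The table and column names mirror the question:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE test (ttt INTEGER, w INTEGER)")
writer.execute("INSERT INTO test VALUES (1, 2)")
writer.commit()

writer.execute("UPDATE test SET w = 16 WHERE ttt = 1")  # not committed yet

# A second connection still sees the old value ...
before = sqlite3.connect(path).execute(
    "SELECT w FROM test WHERE ttt = 1").fetchone()[0]
print(before)  # 2

writer.commit()  # ... until the writer commits.
after = sqlite3.connect(path).execute(
    "SELECT w FROM test WHERE ttt = 1").fetchone()[0]
print(after)  # 16
```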
