I have been given a .db file that has already been populated with both tables and data. However, no description of the content of the database has been made available.
Is there a way for me to retrieve a list of the tables and, for each one, its set of columns, using SQLite3 and Python?
This code helps you list the tables along with their columns; once you have the tables and their columns, you can fetch the data.
import sqlite3

def readDb():
    connection = sqlite3.connect('data.db')
    # sqlite3.Row lets us read column names from a fetched row
    connection.row_factory = sqlite3.Row
    cursor = connection.cursor()

    # List all table names in the database
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
    tabs = [row['name'] for row in cursor.fetchall()]

    # Map each table name to its list of column names
    d = {}
    for tab in tabs:
        cursor.execute("SELECT * FROM " + tab + ";")
        first_row = cursor.fetchone()
        # fetchone() returns None for an empty table; fall back to cursor.description
        if first_row is not None:
            d[tab] = list(first_row.keys())
        else:
            d[tab] = [col[0] for col in cursor.description]

    connection.close()
    return d

print(readDb())
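Alternatively, SQLite's PRAGMA table_info lists column metadata directly, without fetching a data row. A minimal sketch (the data.db file name is carried over from the code above; describe_db is just an illustrative name):

import sqlite3

def describe_db(path='data.db'):
    connection = sqlite3.connect(path)
    cursor = connection.cursor()
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
    tables = [row[0] for row in cursor.fetchall()]
    schema = {}
    for table in tables:
        # PRAGMA table_info returns one row per column: (cid, name, type, notnull, dflt_value, pk)
        cursor.execute("PRAGMA table_info('{}')".format(table))
        schema[table] = [col[1] for col in cursor.fetchall()]
    connection.close()
    return schema

print(describe_db())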
I'm using the hdbcli package to load data from SAP HANA.
Problem: When loading data, I only get the value rows without the actual headers of the SQL table.
When I load only 3 columns (as below), I can add the headers manually, even though it is ugly. That becomes impractical with a SELECT * statement: I don't want to add them by hand, and I might not notice when the table changes.
Question: Is there a flag / command to get the column headers from a table?
Code-MRE:
from hdbcli import dbapi

# Initialize your connection
conn = dbapi.connect(
    address='00.0.000.00',
    port='39015',
    user='User',
    password='Password',
    encrypt=True,
    sslValidateCertificate=False
)

cursor = conn.cursor()
sql_command = "select TITLE, FIRSTNAME, NAME from HOTEL.CUSTOMER;"
cursor.execute(sql_command)
rows = cursor.fetchall()  # returns only data, not the column values
for row in rows:
    for col in row:
        print("%s" % col, end=" ")
    print(" ")
cursor.close()
conn.close()
Thanks to @astentx's comment I found a solution:
cursor = conn.cursor()
sql_command = "select TITLE, FIRSTNAME, NAME from HOTEL.CUSTOMER;"
cursor.execute(sql_command)
rows = cursor.fetchall()  # returns only data, not the column headers
column_headers = [i[0] for i in cursor.description]  # get column headers
cursor.close()
conn.close()

result = [column_headers]  # insert header row
for row in rows:           # insert data rows
    current_row = []
    for cell in row:
        current_row.append(cell)
    result.append(current_row)
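For reference, the same headers can also be zipped with each row to get a list of dicts; a small sketch reusing rows and column_headers from the code above:

# Pair each row's cells with the column names (reuses rows and column_headers from above)
records = [dict(zip(column_headers, row)) for row in rows]
print(records[0])  # e.g. {'TITLE': ..., 'FIRSTNAME': ..., 'NAME': ...}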
I'm trying to delete a row from my PySimpleGUI table that will also delete the same row's data from my sqlite3 database. Using events, I've tried to use the index (e.g. -TABLE- {'-TABLE-': [1]}) to find the row position via values['-TABLE-'], like so:
if event == 'Delete':
    row_index = 0
    for num in values['-TABLE-']:
        row_index = num + 1
    c.execute('DELETE FROM goals WHERE item_id = ?', (row_index,))
    conn.commit()
    window.Element('-TABLE-').Update(values=get_table_data())
I realized that this wouldn't work, since my database uses an item_id that auto-increments with every new row of data and stays fixed (this is just to show how my database is set up):
conn = sqlite3.connect('goals.db')
c = conn.cursor()
c.execute('''CREATE TABLE goals (item_id INTEGER PRIMARY KEY, goal_name text, goal_type text)''')
conn.commit()
conn.close()
Is there a way to use the index (values['-TABLE-']) to find the data inside the selected row in PySimpleGUI and then use that row's data to find and delete the matching row in my sqlite3 database, or is there any other way of doing this that I'm not aware of?
////////////////////////////////////////
FIX:
After more reading of the docs I discovered a .get() method, callable on the '-TABLE-' element, which returns a nested list of all table rows. Using values['-TABLE-'] I can find the row index and then index into the .get() result to reach the specific row whose data I want to delete.
Here is the edited code that made it work for me:
if event == 'Delete':
    row_index = 0
    for num in values['-TABLE-']:
        row_index = num
    # Returns nested list of all Table rows
    all_table_vals = window.element('-TABLE-').get()
    # Index the selected row
    object_name_deletion = all_table_vals[row_index]
    # [0] to index the goal_name of my selected row
    selected_goal_name = object_name_deletion[0]
    c.execute('DELETE FROM goals WHERE goal_name = ?', (selected_goal_name,))
    conn.commit()
    window.Element('-TABLE-').Update(values=get_table_data())
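Deleting by goal_name only works while goal names are unique. A more robust variant (a sketch, assuming get_table_data() includes item_id as the first column of each row) deletes by the primary key instead:

if event == 'Delete':
    for row_index in values['-TABLE-']:
        selected_row = window.Element('-TABLE-').get()[row_index]
        # Assumes item_id is included as the first column of the table data
        selected_item_id = selected_row[0]
        c.execute('DELETE FROM goals WHERE item_id = ?', (selected_item_id,))
    conn.commit()
    window.Element('-TABLE-').Update(values=get_table_data())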
Here is a small example of deleting a row from a table:
import sqlite3

def deleteRecord():
    try:
        sqliteConnection = sqlite3.connect('SQLite_Python.db')
        cursor = sqliteConnection.cursor()
        print("Connected to SQLite")

        # Deleting single record now
        sql_delete_query = """DELETE from SqliteDb_developers where id = 6"""
        cursor.execute(sql_delete_query)
        sqliteConnection.commit()
        print("Record deleted successfully")
        cursor.close()
    except sqlite3.Error as error:
        print("Failed to delete record from sqlite table", error)
    finally:
        if sqliteConnection:
            sqliteConnection.close()
            print("The sqlite connection is closed")

deleteRecord()
In your case, id can be replaced by the name of any column that has a unique value for every row in the table of the database.
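To avoid hard-coding the value, the delete can also be parameterized; a minimal sketch reusing the database and table names from the example above:

import sqlite3

def delete_record_by_id(record_id):
    # Reuses the database and table names from the example above
    connection = sqlite3.connect('SQLite_Python.db')
    cursor = connection.cursor()
    # The ? placeholder lets sqlite3 bind the value safely
    cursor.execute("DELETE FROM SqliteDb_developers WHERE id = ?", (record_id,))
    connection.commit()
    connection.close()

delete_record_by_id(6)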
I have just started learning SQLite and am creating a project which has a .sqlite file containing multiple tables. I want to ask the user to input a table_name and then have the program fetch the columns present in that particular table.
So far I have done this.
app_database.py
import sqlite3

def column_names(table_name):
    conn = sqlite3.connect('northwind_small.sqlite')
    c = conn.cursor()
    c.execute("PRAGMA table_info(table_name)")
    columns = c.fetchall()
    for c in columns:
        print(c[1])
    conn.commit()
    conn.close()
our-app.py
import app_database
table_name = input("Enter the table name = ")
app_database.column_names(table_name)
When I run our-app.py I don't get any output:
C:\Users\database-project>python our-app.py
Enter the table name = Employee
C:\Users\database-project>
Can anyone tell me how should I proceed?
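The likely cause is that PRAGMA table_info(table_name) looks up a table literally named table_name rather than the value of the variable; PRAGMA statements do not accept ? parameters, so the name has to be placed into the string itself, ideally only after checking it against the tables that actually exist. A sketch of that fix, keeping the file name from the question:

import sqlite3

def column_names(table_name):
    conn = sqlite3.connect('northwind_small.sqlite')
    cur = conn.cursor()
    # Only interpolate names that really exist in the database
    cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
    tables = [row[0] for row in cur.fetchall()]
    if table_name in tables:
        cur.execute("PRAGMA table_info('{}')".format(table_name))
        for col in cur.fetchall():
            print(col[1])
    conn.close()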
I'm creating a new table and then inserting values into it, because the TSV file doesn't have headers, so I need to create the table structure first and then insert the values. I'm using the df.to_sql function to insert the TSV values into the database table, but while it creates the table, it doesn't insert any values into it, and it doesn't raise any error either.
I have tried creating a new table through SQLAlchemy and inserting values, which worked, but it didn't work for the already created table.
import csv
import sys
import pandas as pd
from sqlalchemy import create_engine

conn, cur = create_conn()
engine = create_engine('postgresql://postgres:Shubham#123#localhost:5432/walmart')

create_query = '''create table if not exists new_table(
    "item_id" TEXT, "product_id" TEXT, "abstract_product_id" TEXT,
    "product_name" TEXT, "product_type" TEXT, "ironbank_category" TEXT,
    "primary_shelf" TEXT, "apparel_category" TEXT, "brand" TEXT)'''
cur.execute(create_query)
conn.commit()

file_name = 'new_table'
new_file = "C:\\Users\\shubham.shinde\\Desktop\\wallll\\new_file.txt"
data = pd.read_csv(new_file, delimiter="\t", chunksize=500000, error_bad_lines=False, quoting=csv.QUOTE_NONE, dtype="unicode", iterator=True)
with open(file_name + '_bad_rows.txt', 'w') as f1:
    sys.stderr = f1
    for df in data:
        df.to_sql('new_table', engine, if_exists='append')
data.close()
I want the values from df.to_sql() to end up in the database table.
Not 100% certain this argument works with postgresql, but I had a similar issue when doing this on mssql. .to_sql() already creates the table named by its first argument, new_table. if_exists='append' also doesn't check for duplicate values: if the data in new_file is overwritten, or run through your function again, it will simply be added to the table again. As to why you're seeing the table name but not the data in it, this might be due to the size of the df. Try setting fast_executemany=True as the second argument of create_engine.
My suggestion: get rid of create_query and handle the data types after to_sql(). Once the SQL table is created, you can use your actual SQL table and join against this staging table for duplicate testing. The non-duplicates can be written to the actual table, converting data types on UPDATE to match the table's data type structure.
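As a rough illustration of that suggestion (a sketch; the connection string, file path, and staging table name here are placeholders), the explicit CREATE TABLE can be dropped and column types passed to to_sql directly:

import csv
import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.types import Text

# Placeholder connection string; replace with your own credentials
engine = create_engine('postgresql://user:password@localhost:5432/walmart')

data = pd.read_csv("new_file.txt", delimiter="\t", chunksize=500000,
                   quoting=csv.QUOTE_NONE, dtype="unicode", iterator=True)

for df in data:
    # Let to_sql create the staging table, forcing TEXT columns explicitly
    df.to_sql('new_table_staging', engine, if_exists='append', index=False,
              dtype={col: Text() for col in df.columns})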
I have to write, in Python with sqlite3, a function that counts the rows of a table.
The thing is that the user should input the name of that table once the function is executed.
So far I have the following. However, I don't know how to "connect" the variable (table) with the function, once it's executed.
Any help would be great.
Thanks
def RT():
    import sqlite3
    conn = sqlite3.connect("MyDB.db")
    table = input("enter table name: ")
    cur = conn.cursor()
    cur.execute("Select count(*) from ?", [table])
    for row in cur:
        print(row[0])
    conn.close()
Columns and Tables Can't be Parameterized
As explained in this SO answer, columns and tables can't be parameterized. This fact might not be documented by any authoritative source (I couldn't find one, so if you know of one please edit this answer and/or the one linked above), but instead has been learned by people trying exactly what was attempted in the question.
The only way to dynamically insert a column or table name is through standard python string formatting:
cur.execute("Select count(*) from {0}".format(table))
Unfortunately, this opens you up to the possibility of SQL injection.
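For example, a hypothetical malicious input shows why (the table and column names here are made up for illustration):

# A hypothetical malicious "table name"
table = "goals UNION SELECT password FROM secret_users"
# The formatted query now reads data from a completely different table:
print("Select count(*) from {0}".format(table))
# Select count(*) from goals UNION SELECT password FROM secret_users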
Whitelist Acceptable Column/Table Names
This SO answer explains that you should use a whitelist to check against acceptable table names. This is what it would look like for you:
import sqlite3

def RT():
    conn = sqlite3.connect("MyDB.db")
    table = input("enter table name: ")
    cur = conn.cursor()
    if table not in ['user', 'blog', 'comment', ...]:
        raise ...  # Include your own error here
    cur.execute("Select count(*) from {0}".format(table))
    for row in cur:
        print(row[0])
    conn.close()
The same SO answer cautions against accepting submitted names directly, "because the validation and the actual table could go out of sync, or you could forget the check." Meaning, you should only derive the name of the table yourself. You can do this by making a clear distinction between accepting user input and building the actual query. Here is an example of what you might do.
import sqlite3

acceptable_table_names = ['user', 'blog', 'comment', ...]

def RT():
    """
    Client side logic: prompt the user to enter a table name.
    You could also give a list of names that you associate with ids.
    """
    table = input("enter table name: ")
    if table in acceptable_table_names:
        table_index = acceptable_table_names.index(table)
        RT_index(table_index)

def RT_index(table_index):
    """
    Backend logic: accept a table index instead of querying the user for
    a table name.
    """
    conn = sqlite3.connect("MyDB.db")
    cur = conn.cursor()
    table = acceptable_table_names[table_index]
    cur.execute("Select count(*) from {0}".format(table))
    for row in cur:
        print(row[0])
    conn.close()
This may seem frivolous, but it keeps the original interface while addressing the potential problem of forgetting to check against the whitelist. The validation and the actual tables could still go out of sync; you'll need to write tests to guard against that.
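One way to write such a test (a sketch, assuming the acceptable_table_names list and MyDB.db file from the code above) is to compare the whitelist against the tables actually present in the database:

import sqlite3

def test_whitelist_matches_database():
    # Assumes acceptable_table_names and MyDB.db from the code above
    conn = sqlite3.connect("MyDB.db")
    cur = conn.cursor()
    cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
    existing = {row[0] for row in cur.fetchall()}
    conn.close()
    # Every whitelisted name must correspond to a real table
    assert set(acceptable_table_names) <= existing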