Postgres/Python: Executed successfully but no rows inserted - python-3.x

Could somebody please help me perform an insert into a Postgres database from Python?
I have a dataframe df_routes:

From  To
A     B
B     A
for index, row in df_routes.iterrows():
    cursor.execute("""
        insert into table(id, version, create_ts, created_by,
                          station_from_code, station_to_code,
                          station_from_icao_code, station_to_icao_code)
        select newid() as id,
               0 as version,
               current_timestamp as create_ts,
               'source_py' as created_by,
               ds_dep.station_code as station_from_code,
               ds_arr.station_code as station_to_code,
               ds_dep.icao_code as station_from_icao_code,
               ds_arr.icao_code as station_to_icao_code
        from dictionary ds_dep
        join dictionary ds_arr
          on ds_dep.station_code = %s
         and ds_arr.station_code = %s
         and ds_dep.delete_ts is null
         and ds_arr.delete_ts is null
        where not exists (select null
                          from tsp_ams_navigation_route nr
                          where nr.station_from_code = %s
                            and nr.station_to_code = %s
                            and nr.delete_ts is null)
        """,
        (row.station_from_code, row.station_to_code,
         row.station_from_code, row.station_to_code))
    conn.commit()
    print('ROUTES inserted ' + row.station_from_code + ' - ' + row.station_to_code)
This code executes successfully, but no rows are inserted. Please assist me.
Thanks!
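One way to narrow this down (a diagnostic sketch, not part of the original post; insert_sql stands for the full statement above): psycopg2's cursor.rowcount reports how many rows the last execute() affected, which distinguishes "the INSERT ... SELECT matched nothing" from "the statement never ran".

for index, row in df_routes.iterrows():
    cursor.execute(insert_sql, (row.station_from_code, row.station_to_code,
                                row.station_from_code, row.station_to_code))
    # rowcount is 0 when the SELECT feeding the INSERT matched no rows,
    # e.g. the dictionary join found neither station, or the NOT EXISTS
    # guard filtered the pair out because the route already exists.
    if cursor.rowcount == 0:
        print('no rows inserted for', row.station_from_code, '-', row.station_to_code)
conn.commit()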

Related

Cannot update existing row on conflict in PostgreSQL with Psycopg2

I have the following function defined to insert several rows with iteration in Python using Psycopg2 and PostgreSQL 11.
When I receive the same obj (with same id), I want to update its date.
import psycopg2
import psycopg2.extras
from psycopg2 import Error
from typing import Any, Dict, Iterator

def insert_execute_values_iterator(
    connection,
    objs: Iterator[Dict[str, Any]],
    page_size: int = 1000,
) -> None:
    with connection.cursor() as cursor:
        try:
            psycopg2.extras.execute_values(cursor, """
                INSERT INTO objs(
                    id,
                    date
                ) VALUES %s
                ON CONFLICT (id)
                DO UPDATE SET (date) = (EXCLUDED.date)
                """, ((
                    obj['id'],
                    obj['date'],
                ) for obj in objs), page_size=page_size)
        except (Exception, Error) as error:
            print("Error while inserting as in database", error)
When a conflict happens on the unique primary key of the table while inserting an element, I get the error:
Error while inserting as in database ON CONFLICT DO UPDATE command
cannot affect row a second time
HINT: Ensure that no rows proposed for insertion within the same command have duplicate constrained values.
FYI, the clause works on PostgreSQL directly but not from the Python code.
Use unique VALUE-combinations in your INSERT statement:
create table foo(id int primary key, date date);
This should work:
INSERT INTO foo(id, date)
VALUES(1,'2021-02-17')
ON CONFLICT(id)
DO UPDATE SET date = excluded.date;
This one won't:
INSERT INTO foo(id, date)
VALUES(1,'2021-02-17') , (1, '2021-02-16') -- 2 conflicting rows
ON CONFLICT(id)
DO UPDATE SET date = excluded.date;
You can fix this by using DISTINCT ON() in a SELECT statement:
INSERT INTO foo(id, date)
SELECT DISTINCT ON(id) id, date
FROM (VALUES(1,CAST('2021-02-17' AS date)) , (1, '2021-02-16')) s(id, date)
ORDER BY id, date ASC
ON CONFLICT(id)
DO UPDATE SET date = excluded.date;
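Since the rows originate in Python, an equivalent fix on that side (a sketch, not from the original answer; which duplicate should win is an assumption, here the last one seen) is to de-duplicate the objects by id before calling execute_values:

import psycopg2.extras

def insert_unique_values(connection, objs, page_size=1000):
    # One entry per id; a later obj with the same id overwrites the
    # earlier one, so the last date seen in the batch wins.
    latest = {}
    for obj in objs:
        latest[obj['id']] = obj['date']
    with connection.cursor() as cursor:
        psycopg2.extras.execute_values(cursor, """
            INSERT INTO objs(id, date) VALUES %s
            ON CONFLICT (id) DO UPDATE SET date = EXCLUDED.date
            """, list(latest.items()), page_size=page_size)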

Need help using a PySimpleGUI TABLE with Sqlite3

I'm trying to delete a row from my PySimpleGUI table that will also delete the same row's data from my sqlite3 database. Using events, I've tried to use the index (e.g. event '-TABLE-' with values {'-TABLE-': [1]}) to get the row position from values['-TABLE-'], like so:
if event == 'Delete':
    row_index = 0
    for num in values['-TABLE-']:
        row_index = num + 1
    c.execute('DELETE FROM goals WHERE item_id = ?', (row_index,))
    conn.commit()
    window.Element('-TABLE-').Update(values=get_table_data())
I realized that this wouldn't work, since my database uses a row id that auto-increments with every new row of data and stays fixed, like so (this is just to show how my database is set up):
conn = sqlite3.connect('goals.db')
c = conn.cursor()
c.execute('''CREATE TABLE goals (item_id INTEGER PRIMARY KEY, goal_name text, goal_type text)''')
conn.commit()
conn.close()
Is there a way to use the index (values['-TABLE-']) to find the data inside the selected row in PySimpleGUI, and then use that row's data to find and delete the matching row in my sqlite3 database? Or is there another way of doing this that I'm not aware of?
////////////////////////////////////////
FIX:
Upon more reading of the docs I discovered the .get() method, callable on the '-TABLE-' element, which returns a nested list of all table rows. Using values['-TABLE-'] I can find the row index, then use .get() to look up the specific row holding the data I want to delete.
Here is the edited code that made it work for me:
if event == 'Delete':
    row_index = 0
    for num in values['-TABLE-']:
        row_index = num
    # Returns nested list of all Table rows
    all_table_vals = window.element('-TABLE-').get()
    # Index the selected row
    object_name_deletion = all_table_vals[row_index]
    # [0] to index the goal_name of my selected row
    selected_goal_name = object_name_deletion[0]
    c.execute('DELETE FROM goals WHERE goal_name = ?', (selected_goal_name,))
    conn.commit()
    window.Element('-TABLE-').Update(values=get_table_data())
Here is a small example of deleting a row from a table:
import sqlite3

def deleteRecord():
    try:
        sqliteConnection = sqlite3.connect('SQLite_Python.db')
        cursor = sqliteConnection.cursor()
        print("Connected to SQLite")

        # Deleting single record now
        sql_delete_query = """DELETE from SqliteDb_developers where id = 6"""
        cursor.execute(sql_delete_query)
        sqliteConnection.commit()
        print("Record deleted successfully")
        cursor.close()
    except sqlite3.Error as error:
        print("Failed to delete record from sqlite table", error)
    finally:
        if sqliteConnection:
            sqliteConnection.close()
            print("the sqlite connection is closed")

deleteRecord()
In your case, id will be the name of whichever column has a unique value for every row in the table.
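A variant that avoids relying on goal_name being unique (a sketch under one assumption: get_table_data() returns item_id as the first column of every row) deletes by the primary key instead:

if event == 'Delete':
    all_table_vals = window.Element('-TABLE-').get()
    for row_index in values['-TABLE-']:
        # Assumes column 0 of each table row holds item_id, i.e.
        # get_table_data() selects item_id first.
        selected_item_id = all_table_vals[row_index][0]
        c.execute('DELETE FROM goals WHERE item_id = ?', (selected_item_id,))
    conn.commit()
    window.Element('-TABLE-').Update(values=get_table_data())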

Postgres is putting NaN in null values

I am using psycopg2 in a Python script. The script parses JSON files and puts them into a Postgres RDS. When a value is missing in the JSON file, the script is supposed to skip that column, so it should insert a null value into the table, but instead it puts NaN. Has anybody encountered this issue?
The part that checks if the column is empty -
if (str(df.loc[0][col]) == "" or df.loc[0][col] is None or str(df.loc[0][col]) == 'None'
        or str(df.loc[0][col]) == 'NaN' or str(df.loc[0][col]) == 'null'):
    df.drop(col, axis=1, inplace=True)
else:
    cur.execute("call mrr.add_column_to_table('{0}', '{1}');".format(table_name, col))
The insertion part -
import psycopg2.extras

def copy_df_to_sql(df, conn, table_name):
    if len(df) > 0:
        df_columns = list(df)
        columns = '","'.join(df_columns)  # create "col1","col2",...
        # create VALUES(%s, %s, ...), one %s per column
        values = "VALUES({})".format(",".join(["%s" for _ in df_columns]))
        # create INSERT INTO table ("col1","col2",...) VALUES(%s,...)
        emp = '"'
        insert_stmt = 'INSERT INTO mrr.{} ({}{}{}) {}'.format(table_name, emp, columns, emp, values)
        cur = conn.cursor()
        psycopg2.extras.execute_batch(cur, insert_stmt, df.values)
        conn.commit()
        cur.close()
OK, so the reason this is happening is that pandas represents missing values as NaN, so when I insert a DataFrame into the table, the missing values go in as pandas' NaN rather than SQL NULL.
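A common fix (a sketch, not from the original post) is to convert NaN to None before handing the rows to psycopg2, since psycopg2 maps Python None to SQL NULL but sends float NaN through as NaN:

import pandas as pd

# Cast to object first so None survives instead of being coerced
# back to NaN, then replace every missing value with None.
df = df.astype(object).where(pd.notnull(df), None)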

Redshift - Passing output from SQL to different variables

I have a SQL script that extracts sale data by agent.
cur = conn.cursor()
cur.execute("""select sales_rep,to_char(sales_date,'yyyy-mm')as month,count(*) from sale""")
report = cur.fetchall()
I am trying to see if I can pass the count obtained from the output to a variable (count) and the month value to another variable (month_count).
Could anyone advise on this? Thanks.
Update:
Sample Output:

Sales_Rep  Month  Count
Person1    Jan    20
Person1    Feb    15
Person1    Mar    10
Person2    Jan    8
Person2    Feb    13
Person2    Mar    15

Expected Output:

count = 20,15,10,8,13,15
month = jan,feb,mar,jan,feb,mar
You just need a basic group by query here:
SELECT
    sales_rep,
    TO_CHAR(sales_date, 'yyyy-mm') AS month,
    COUNT(*) AS cnt
FROM sale
GROUP BY
    sales_rep,
    TO_CHAR(sales_date, 'yyyy-mm');
Python code:
cur = conn.cursor()
cur.execute("""SELECT sales_rep, TO_CHAR(sales_date, 'yyyy-mm') AS month, COUNT(*) AS cnt
               FROM sale
               GROUP BY sales_rep, TO_CHAR(sales_date, 'yyyy-mm')""")
rows = cur.fetchall()
for row in rows:
    # fetchall() returns tuples: (sales_rep, month, cnt)
    print("{} {}".format(row[1], row[2]))

sqlite3 update/adding data to new column

I made a new column with NULL values called 'id' in the table. Now I want to add data to it from a list, which holds about 130k elements.
I tried with INSERT, and it returned an error:
conn = create_connection(xml_db)
cursor = conn.cursor()
with conn:
    cursor.execute("ALTER TABLE xml_table ADD COLUMN id integer")
    for data in ssetId:
        cursor.execute("INSERT INTO xml_table(id) VALUES (?)", (data,))
    conn.commit()
I also tried with update:
conn = create_connection(xml_db)
cursor = conn.cursor()
with conn:
    cursor.execute("ALTER TABLE xml_table ADD COLUMN id INTEGER")
    for data in ssetId:
        cursor.execute("UPDATE xml_table SET ('id' = ?)", (data,))
    conn.commit()
What is incorrect here?
EDIT for clarification: the table already exists and is filled with data. I want to add an 'id' column with custom values to it.
Here's an example similar to yours which may be useful.
import sqlite3

conn = sqlite3.connect("xml.db")
cursor = conn.cursor()
with conn:
    # for testing purposes, remove this or else the table gets dropped whenever the file is loaded
    cursor.execute("drop table if exists xml_table")
    # create table with some other field
    cursor.execute("create table if not exists xml_table (other_field integer not null)")
    for other_data in range(5):
        cursor.execute("INSERT INTO xml_table (other_field) VALUES (?)", (other_data,))
    # add id field
    cursor.execute("ALTER TABLE xml_table ADD COLUMN id integer")
    # make sure the table exists
    res = cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
    print("Table Name: {}".format(res.fetchone()[0]))
    # add data to the table
    for data in range(5):
        cursor.execute("UPDATE xml_table SET id = ? WHERE other_field = ?", (data, data))
    # if you must insert an id, you must specify an other_field value as well, since other_field must be not null
    cursor.execute("insert into xml_table (id, other_field) VALUES (?, ?)", (100, 105))
    # make sure data exists
    res = cursor.execute("SELECT id, other_field FROM xml_table")
    for id_result in res:
        print(id_result)
    conn.commit()
conn.close()
As I stated in the comment below, since one of your columns has a NOT NULL constraint on it, no row can exist in the table with that column NULL. In the example above, other_field is declared NOT NULL, so there can be no row with a NULL in other_field; any deviation from this is an IntegrityError.
Output:
Table Name: xml_table
(0, 0)
(1, 1)
(2, 2)
(3, 3)
(4, 4)
(100, 105)
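Back to the original 130k-element list: an UPDATE keyed on SQLite's implicit rowid fills the new column without inserting rows (a sketch under one assumption: ssetId is ordered the same way as the existing rows, since rowid starts at 1 and follows insertion order):

conn = create_connection(xml_db)
cursor = conn.cursor()
with conn:
    cursor.execute("ALTER TABLE xml_table ADD COLUMN id integer")
    # Pair each list element with a rowid; executemany runs one
    # UPDATE per pair inside a single transaction, far cheaper
    # than committing 130k separate statements.
    cursor.executemany(
        "UPDATE xml_table SET id = ? WHERE rowid = ?",
        ((data, i) for i, data in enumerate(ssetId, start=1)),
    )
    conn.commit()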
