pymssql - SELECT works but UPDATE doesn't

import pymssql

CONN = pymssql.connect(server='1233123123', user='s123', password='sa1231231', database='DBforTEST')
CURSOR = CONN.cursor()

# This part is fine -- the SELECT works.
CURSOR.execute("SELECT ttt FROM test WHERE w = 2")
ROW = CURSOR.fetchone()
tmp = list()
tmp.append(ROW)
if ROW is None:
    print("table has nothing")
else:
    while ROW:
        ROW = CURSOR.fetchone()
        tmp.append(ROW)
print(tmp)
# it works!

CURSOR.execute("""
    UPDATE test
    SET w = 16
    WHERE ttt = 1
""")
# it doesn't work
I'm using Python 3.5 with pymssql.
In my code the SELECT statement works, so I know the connection is fine.
But the UPDATE statement has no effect in Python, even though the same statement works in SSMS.
What is the problem?
My guess is that SELECT only reads, so the database can serve the data, while UPDATE modifies the database, so it gets blocked.
How can I solve this?

CONN.commit()
If autocommit is not enabled, you have to commit the transaction yourself.
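For illustration, here is the same commit rule demonstrated with the stdlib's sqlite3 (another DB-API 2.0 driver, like pymssql): an UPDATE only becomes permanent once you commit, and an uncommitted change is simply discarded.

```python
import sqlite3

# In-memory stand-in for the asker's table; names mirror the question.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE test (ttt INTEGER, w INTEGER)")
cur.execute("INSERT INTO test VALUES (1, 2)")
conn.commit()

cur.execute("UPDATE test SET w = 16 WHERE ttt = 1")
conn.rollback()  # simulates closing without commit: the UPDATE is lost
print(cur.execute("SELECT w FROM test WHERE ttt = 1").fetchone())  # (2,)

cur.execute("UPDATE test SET w = 16 WHERE ttt = 1")
conn.commit()    # now the change sticks
print(cur.execute("SELECT w FROM test WHERE ttt = 1").fetchone())  # (16,)
```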


Need help using a PySimpleGUI TABLE with Sqlite3

I'm trying to delete a row from my PySimpleGUI table that will also delete the same row's data from my sqlite3 database. Using events, I've tried to use the index, e.g. -TABLE- {'-TABLE-': [1]}, to get the row position from values['-TABLE-'] like so:
if event == 'Delete':
    row_index = 0
    for num in values['-TABLE-']:
        row_index = num + 1
    c.execute('DELETE FROM goals WHERE item_id = ?', (row_index,))
    conn.commit()
    window.Element('-TABLE-').Update(values=get_table_data())
I realized that this wouldn't work since I'm using a ROW_ID in my database that Auto-increments with every new row of data and stays fixed like so (this is just to show how my database is set up):
conn = sqlite3.connect('goals.db')
c = conn.cursor()
c.execute('''CREATE TABLE goals (item_id INTEGER PRIMARY KEY, goal_name text, goal_type text)''')
conn.commit()
conn.close()
Is there a way to use the index ( values['-TABLE-'] ) to find the data inside the selected row in pysimplegui and then using the selected row's data to find the row in my sqlite3 database to delete it, or is there any other way of doing this that I'm not aware of?
////////////////////////////////////////
FIX:
Upon more reading of the docs I discovered a .get() method, callable on the '-TABLE-' element, which returns a nested list of all table rows. Combined with the row index from values['-TABLE-'], I can use .get() to index the specific list holding the data I want to delete.
Here is the edited code that made it work for me:
if event == 'Delete':
    row_index = 0
    for num in values['-TABLE-']:
        row_index = num
    # Returns nested list of all Table rows
    all_table_vals = window.element('-TABLE-').get()
    # Index the selected row
    object_name_deletion = all_table_vals[row_index]
    # [0] to index the goal_name of the selected row
    selected_goal_name = object_name_deletion[0]
    c.execute('DELETE FROM goals WHERE goal_name = ?', (selected_goal_name,))
    conn.commit()
    window.Element('-TABLE-').Update(values=get_table_data())
Here is a small example that deletes a row from a table:
import sqlite3

def deleteRecord():
    sqliteConnection = None
    try:
        sqliteConnection = sqlite3.connect('SQLite_Python.db')
        cursor = sqliteConnection.cursor()
        print("Connected to SQLite")
        # Deleting a single record now
        sql_delete_query = """DELETE FROM SqliteDb_developers WHERE id = 6"""
        cursor.execute(sql_delete_query)
        sqliteConnection.commit()
        print("Record deleted successfully")
        cursor.close()
    except sqlite3.Error as error:
        print("Failed to delete record from sqlite table", error)
    finally:
        if sqliteConnection:
            sqliteConnection.close()
            print("the sqlite connection is closed")

deleteRecord()
In your case, id will be the name of any column that has a unique value for every row in the table.
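An alternative that avoids relying on goal_name being unique: include item_id in the data backing the table, so a selected row index maps directly to the primary key. A runnable sketch with an in-memory database, where a hard-coded index stands in for what values['-TABLE-'] would give you:

```python
import sqlite3

# Schema from the question; sample goals are illustrative.
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE goals (item_id INTEGER PRIMARY KEY, goal_name text, goal_type text)")
c.executemany("INSERT INTO goals (goal_name, goal_type) VALUES (?, ?)",
              [("run", "fitness"), ("read", "habit"), ("save", "finance")])

# Keep item_id as the first column of the rows that feed the GUI table.
table_data = c.execute("SELECT item_id, goal_name, goal_type FROM goals").fetchall()

selected_index = 1                       # stand-in for values['-TABLE-'][0]
item_id = table_data[selected_index][0]  # primary key of the selected row
c.execute("DELETE FROM goals WHERE item_id = ?", (item_id,))
conn.commit()
print(c.execute("SELECT goal_name FROM goals").fetchall())  # [('run',), ('save',)]
```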

After integer greater than 9 Incorrect number of bindings supplied. The current statement uses 1, and there are 2 supplied

def delete_Link(id):
    connection = sql_Connect()
    cursor = connection.cursor()
    cursor.execute("DELETE FROM table WHERE id =?", str(id))
    connection.commit()
While iterating over rows, once the table id is greater than 9 I receive the following error:
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 2 supplied.
Change str(id) to (str(id), ) like in this example:
import sqlite3

def delete_link(id):
    connection = sqlite3.connect('test.db')
    cursor = connection.cursor()
    cursor.execute("DELETE FROM t WHERE id =?", (str(id),))
    connection.commit()
    print('Deleted ' + str(id))

if __name__ == '__main__':
    for id in range(1, 11):
        delete_link(id)
Check out some examples of .execute() at https://docs.python.org/2/library/sqlite3.html — they show how to pass a tuple to that method in a parameterized query.
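The underlying cause, shown runnably: a Python string is itself a sequence of characters, so str(10) supplies two bindings ("1" and "0"), while a one-element tuple supplies exactly one. That is why the code worked for single-digit ids and broke at 10.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")

# Passing a string: "10" is treated as a sequence of TWO parameters.
err = None
try:
    conn.execute("DELETE FROM t WHERE id = ?", str(10))
except sqlite3.ProgrammingError as e:
    err = e
print(err)  # Incorrect number of bindings supplied...

# Passing a one-element tuple: exactly ONE parameter, as the statement expects.
conn.execute("DELETE FROM t WHERE id = ?", (10,))
```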
def delete_Link(id):
    connection = sql_Connect()
    cursor = connection.cursor()
    cursor.execute("DELETE FROM table WHERE id =?", [id])
    connection.commit()
Changed the original snippet to include [] instead of () which seems to have fixed the issue. Hope this helps some others.

fetchall method converting Postgresql timestamptz field to different timezone

First question on here, so let me know if more information is needed. I am using the Python psycopg2-binary==2.7.7 package in an attempt to pull PostgreSQL 9.6.11 timestamptz fields out of a database.
With that said, the 'psycopg2' package seems to be coercing the timestamptz date-times to a different timezone than is present in the database.
For instance, the following query will return the correct offset if run in a PostgreSQL client:
SQL
SELECT row_to_json(t)
FROM (
    SELECT '2019-01-24T08:24:00-05:00'::timestamptz AS tz
) t;
Result
{"tz":"2019-01-24 08:24:00-05"}
However, if I run the same query via the psycopg2.cursor.fetchall method, I get a different offset than expected/returned:
import time
import psycopg2
import logging

logger = logging.getLogger()

def getRows(query, printRows=False, **kwargs):
    try:
        cs = "dbname={dbname} user={dbuser} password={dbpass} host={server} port={port}".format(
            **kwargs)
        con = psycopg2.connect(cs)
        con.set_session(readonly=True, autocommit=True)
    except Exception:
        logger.exception("-->>>>Something went wrong connecting to db")
        return None
    end = None
    try:
        start = time.time()
        cur = con.cursor()
        cur.execute(query)
        rows = cur.fetchall()
        if printRows:
            for i in rows:
                print(i)
        cur.close()
        con.commit()
        con.close()
        end = time.time()
        logger.info(
            "-->>>>Query took {} seconds...".format(round(end - start, 2)))
        return rows
    except Exception:
        end = time.time()
        cur.close()
        con.commit()
        con.close()
        logger.exception("-->>>>Something went wrong with the query...")
        logger.info(
            "-->>>>Query took {} seconds...".format(round(end - start, 2)))

if __name__ == '__main__':
    test = getRows("""SELECT row_to_json(t) AS "result"
        FROM (
            SELECT '2019-01-24T08:24:00-05:00'::timestamptz AS tz
        ) t;
        """, printRows=True, **DBSECRETS)
    print(test[0][0])
Result
{'tz': '2019-01-24T05:24:00-08:00'}
As seen above, the EST offset (-05) sent to PostgreSQL is being converted to a -08:00 offset by the psycopg2 package.
I've checked the psycopg2 documentation but could not find any conclusive examples that address this issue. Specifically, I've checked here:
http://initd.org/psycopg/docs/cursor.html#cursor.tzinfo_factory
It turns out that the SQL client, DBeaver, coerces a timestamptz to the local OS timezone, which in this case is EST.
How to change DBeaver timezone / How to stop DBeaver from converting date and time
The PostgreSQL server, however, has a native timezone of Pacific time (PST). Thus the psycopg2 package was interpreting the timestamptz correctly according to the server, i.e. PST.
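The two representations denote the same instant, only displayed with different offsets. A quick stdlib check (zoneinfo, Python 3.9+) confirms that the -05:00 value and its Pacific-time rendering are equal:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# The literal from the question, then re-displayed in the server's zone.
est = datetime.fromisoformat("2019-01-24T08:24:00-05:00")
pst = est.astimezone(ZoneInfo("America/Los_Angeles"))
print(pst.isoformat())  # 2019-01-24T05:24:00-08:00
print(est == pst)       # True -- same instant, different display offset
```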

Checking if python sqlite table is populated

Here is my code:
dbContent = cursor.execute("SELECT COUNT(*) FROM parse")
if dbContent is None:
    # This should run the nested code if it worked.
Instead it runs the else statement which is not what should be happening.
I am not a Python expert, but I think your code is just not correct. Please try this instead:
cursor.execute("SELECT COUNT(*) FROM parse")
result = cursor.fetchone()
numRows = result[0]
if numRows == 0:
    # run your code
dbContent = cursor.execute("SELECT * FROM parse LIMIT 1")
rows = dbContent.fetchall()
if not rows:
    ...
Lists have implicit booleanness which you can use. Note that this only works if you select actual rows: SELECT COUNT(*) always returns exactly one row (containing the count), so its result list is never empty.
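Putting the corrected check into a self-contained script (an in-memory database so it runs anywhere; the parse table and its column are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE parse (data TEXT)")

# COUNT(*) always returns one row; the count itself tells you if the table is empty.
cur.execute("SELECT COUNT(*) FROM parse")
print("empty" if cur.fetchone()[0] == 0 else "populated")  # empty

cur.execute("INSERT INTO parse VALUES ('x')")
cur.execute("SELECT COUNT(*) FROM parse")
print("empty" if cur.fetchone()[0] == 0 else "populated")  # populated
```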

Oracle database using Python

How can I avoid creating the table again and again in Python with an Oracle database?
Every time I call the function, the CREATE TABLE query is executed and the data is not inserted because the table already exists.
import cx_Oracle
import time

def Database(name, idd, contact):
    con = None
    try:
        con = cx_Oracle.connect('arslanhaider/12345#AHS:1521/XE')
        cur = con.cursor()
        cur.execute("CREATE TABLE Mazdoor(Name varchar(255), EmpID INT, ContactNo INT)")
        cur.execute("INSERT INTO Mazdoor VALUES(:1, :2, :3)", (name, idd, contact))
        con.commit()
        cur.execute("SELECT * FROM Mazdoor")
        data = cur.fetchall()
        for row in data:
            print(row)
    except cx_Oracle.Error:
        if con:
            con.rollback()
    finally:
        if con:
            con.close()

if __name__ == "__main__":
    while True:
        n = input("Enter Name::")
        i = input("Enter Idd::")
        c = input("Enter Contact No::")
        Database(n, i, c)
        time.sleep(3)
        print("Record Successfully Stored......\n\n")
"Obviously, (koff, koff ...) you must know what you are doing!"
If you ask Oracle to CREATE TABLE, knowing in advance that the table might already exist, then your logic should at least be prepared ... through the use of multiple try..except..finally blocks as appropriate, to handle this situation.
If the CREATE TABLE statement fails because the table already exists, then you can be quite sure that an exception will be thrown, and that you, in the relevant except clause, can determine that "this, indeed, is the reason." You might reasonably then choose to ignore this possibility, and to "soldier on."
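One way to "soldier on" is to catch the table-already-exists error and continue. With cx_Oracle you would catch cx_Oracle.DatabaseError and check for ORA-00955 ("name is already used by an existing object"). The sketch below illustrates the same pattern with the stdlib's sqlite3, so it runs without an Oracle server; the error class and message check are the parts you would swap for the Oracle equivalents:

```python
import sqlite3

def ensure_table(cur):
    # Attempt the CREATE and treat "already exists" as non-fatal.
    try:
        cur.execute("CREATE TABLE Mazdoor (Name TEXT, EmpID INTEGER, ContactNo INTEGER)")
        return "created"
    except sqlite3.OperationalError as e:
        # With cx_Oracle: except cx_Oracle.DatabaseError, then check
        # e.args[0].code == 955 (ORA-00955).
        if "already exists" in str(e):
            return "exists"
        raise

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
print(ensure_table(cur))  # created
print(ensure_table(cur))  # exists
```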