fetchall method converting Postgresql timestamptz field to different timezone - python-3.x

First question on here, so let me know if more information is needed. I am using the Python psycopg2-binary==2.7.7 package in an attempt to pull PostgreSQL 9.6.11 timestamptz fields out of a database.
However, the psycopg2 package seems to be coercing the timestamptz date-times to a different time zone than the one stored in the database.
For instance, the following query will return the correct offset if run in a PostgreSQL client:
SQL
SELECT row_to_json(t)
FROM (
    SELECT '2019-01-24T08:24:00-05:00'::timestamptz AS tz
) t;
Result
{"tz":"2019-01-24 08:24:00-05"}
However, if I run the same query via the psycopg2.cursor.fetchall method, I get a different offset than expected/returned:
import time
import psycopg2
import logging

logger = logging.getLogger()

def getRows(query, printRows=False, **kwargs):
    try:
        cs = "dbname={dbname} user={dbuser} password={dbpass} host={server} port={port}".format(
            **kwargs)
        con = psycopg2.connect(cs)
        con.set_session(readonly=True, autocommit=True)
    except Exception:
        logger.exception("-->>>>Something went wrong connecting to db")
        return None
    end = None
    try:
        start = time.time()
        cur = con.cursor()
        cur.execute(query)
        rows = cur.fetchall()
        if printRows:
            for i in rows:
                print(i)
        cur.close()
        con.commit()
        con.close()
        end = time.time()
        logger.info(
            "-->>>>Query took {} seconds...".format(round(end - start, 2)))
        return rows
    except Exception:
        end = time.time()
        cur.close()
        con.commit()
        con.close()
        logger.exception("-->>>>Something went wrong with the query...")
        logger.info(
            "-->>>>Query took {} seconds...".format(round(end - start, 2)))

if __name__ == '__main__':
    test = getRows("""SELECT row_to_json(t) AS "result"
                      FROM (
                          SELECT '2019-01-24T08:24:00-05:00'::timestamptz AS tz
                      ) t;
                   """, printRows=True, **DBSECRETS)
    print(test[0][0])
Result
{'tz': '2019-01-24T05:24:00-08:00'}
As seen above, the EST offset (-05:00) sent to PostgreSQL is being converted to a -08:00 offset by the psycopg2 package.
I've checked the psycopg2 documentation but could not find any conclusive examples to fix this issue. Specifically, I've checked here:
http://initd.org/psycopg/docs/cursor.html#cursor.tzinfo_factory

It turns out that the SQL client, DBeaver, coerces a timestamptz to the local OS time zone, which in this case is EST.
How to change DBeaver timezone / How to stop DBeaver from converting date and time
The PostgreSQL server, however, has a native timezone of Pacific time or PST. Thus, the psycopg2 package was interpreting the timestamptz correctly according to the server, i.e. PST.
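Since psycopg2 returns timestamptz values as timezone-aware datetimes, the same instant can be re-rendered in any offset after the fact. A minimal sketch using only the standard library; the PST session offset of -08:00 is an assumption taken from the resolution above:

```python
from datetime import datetime, timedelta, timezone

# psycopg2 hands back timestamptz values as timezone-aware datetimes in the
# session time zone; here that is assumed to be PST (UTC-08:00).
pst = timezone(timedelta(hours=-8))
returned = datetime(2019, 1, 24, 5, 24, tzinfo=pst)

# The instant is unchanged; only the displayed offset differs.
est = timezone(timedelta(hours=-5))
print(returned.astimezone(est).isoformat())  # 2019-01-24T08:24:00-05:00
```

Both representations denote the same point in time, so downstream comparisons and arithmetic are unaffected by which offset the driver chose to display.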

Related

how to compare datetime using psycopg2 in python3?

I wish to execute a statement in my Python script, as below, to delete records older than 45 days from a PostgreSQL table. Consider only the code below:
import psycopg2
from datetime import datetime

cur = conn.cursor()
mpath = None
sql1 = cur.execute(
    "Delete from table1 where mdatetime < datetime.today() - interval '45 days'")
This causes the following error:
psycopg2.errors.InvalidSchemaName: schema "datetime" does not exist
LINE 1: Delete from logsearch_maillogs2 where mdatetime <
datetime.t...
How exactly do I change the format or resolve this? Do I need to convert the value? I saw a few posts saying that a datetime type doesn't exist in PostgreSQL, but I didn't find exact code to resolve this issue. Please guide.
The query runs in Postgres, not Python, so if you are writing a hard-coded string you need to use the SQL timestamp functions, not the Python ones. So datetime.today() becomes now(), per Current Date/Time:
sql1 = cur.execute(
    "Delete from table1 where mdatetime < now() - interval '45 days'")
Or, if you want a dynamic query, you need to use parameters (per Passing parameters to SQL queries) to pass in a Python datetime:
sql1 = cur.execute(
    "Delete from table1 where mdatetime < %s - interval '45 days'", [datetime.today()])
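The parameter route can also move the interval arithmetic entirely into Python, which sidesteps mixing SQL intervals with a bound value; a small sketch with the standard library (table and column names are the ones from the question, and the execute call is commented out since it needs a live connection):

```python
from datetime import datetime, timedelta

# Compute the 45-day cutoff in Python and bind only a plain timestamp.
cutoff = datetime.today() - timedelta(days=45)

# cur.execute("DELETE FROM table1 WHERE mdatetime < %s", [cutoff])
```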

Creating a PostgreSQL table using psycopg2 in Python

I am trying to connect to a remote PostgreSQL database using the psycopg2 library in Python. To be clear, I can already do this using psql.exe, but that is not what I want to do here. So far, I have verified that I can connect and use my cursor to perform a simple query on an existing table:
import psycopg2
conn = psycopg2.connect(dbname='mydb', user='postgres', password='mypassword', host='www.mydbserver.com', port='5432', sslmode='require')
cur = conn.cursor()
cur.execute('SELECT * FROM existing_schema.existing_table')
one = cur.fetchone()
print(one)
This essentially connects to an existing schema and table and selects everything. I then fetch the first row from cur and print it. Example output: ('090010100001', '09001', None, 'NO', None, 'NO'). Now, I want to create a new table using this same method. I have already created a new schema called test within mydb. My plan is to copy csv data to the table later, but for now, I just want to create the blank table. Here's what I have tried:
cur.execute("""
    CREATE TABLE test.new_table
    (
        region TEXT,
        state TEXT,
        tier TEXT,
        v_detailed DOUBLE PRECISION,
        v_approx DOUBLE PRECISION,
        v_unmapped DOUBLE PRECISION,
        v_total DOUBLE PRECISION,
        a_detailed DOUBLE PRECISION,
        a_approx DOUBLE PRECISION,
        a_unmapped DOUBLE PRECISION,
        a_total DOUBLE PRECISION
    )
""")
conn.commit()
When I ran the above in a Jupyter Notebook, I assumed it would be a rather quick process. However, it seemed to get stuck and just run and run (the process did not complete after 30+ minutes). Eventually, it threw an error: OperationalError: server closed the connection unexpectedly. This probably means the server terminated abnormally before or while processing the request. Should it take that long to run this simple bit of code?! (I'm guessing no.) What might I be doing wrong here?
OK, the issue turned out to be with how I was using the .copy_from() method in psycopg2. This is how I overcame it:
conn = psycopg2.connect(dbname='mydb', user='postgres', password='mypassword', host='www.mydbserver.com', port='5432', sslmode='require')
cur = conn.cursor()
cur.execute("""
    CREATE TABLE test.new_table
    (
        region TEXT,
        state TEXT,
        tier TEXT,
        v_detailed DOUBLE PRECISION,
        v_approx DOUBLE PRECISION,
        v_unmapped DOUBLE PRECISION,
        v_total DOUBLE PRECISION,
        a_detailed DOUBLE PRECISION,
        a_approx DOUBLE PRECISION,
        a_unmapped DOUBLE PRECISION,
        a_total DOUBLE PRECISION
    )
""")
conn.commit()
with open(output_file, 'r') as f:
    next(f)  # Skip the header row.
    # You must set the search_path to the desired schema beforehand
    cur.execute('SET search_path TO test, public')
    tbl = 'region_report_%s' % (report_type)
    cur.copy_from(f, tbl, sep=',')
conn.commit()
conn.close()
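Note that copy_from does not accept a schema-qualified table name, which is why the search_path had to be set first. An alternative is copy_expert, which takes a full COPY statement, so the schema-qualified name works directly and COPY can skip the CSV header itself. The helper below merely builds that statement; the helper name is hypothetical and the table name is the one from the question:

```python
def copy_csv_statement(table, sep=","):
    # COPY ... FROM STDIN for use with cursor.copy_expert; unlike copy_from,
    # the table name here may be schema-qualified, and HEADER true makes
    # COPY skip the header row so next(f) is unnecessary.
    return (f"COPY {table} FROM STDIN WITH "
            f"(FORMAT csv, DELIMITER '{sep}', HEADER true)")

# Usage sketch, assuming an open psycopg2 connection and cursor:
# with open(output_file) as f:
#     cur.copy_expert(copy_csv_statement("test.new_table"), f)
# conn.commit()
print(copy_csv_statement("test.new_table"))
```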

Extract large data in PostgreSQL using Python (preferably in dataframe format)

I have imported many large csv files into tables to my postgresql database, I know how to connect to the database with this code:
import psycopg2

try:
    connection = psycopg2.connect(user="xxx",
                                  password="xxx",
                                  host="xxx",
                                  port="xxx",
                                  database="xxx")
    cursor = connection.cursor()
    # Print PostgreSQL Connection properties
    print(connection.get_dsn_parameters(), "\n")
    # Print PostgreSQL version
    cursor.execute("SELECT version();")
    record = cursor.fetchone()
    print("You are connected to - ", record, "\n")
except (Exception, psycopg2.Error) as error:
    print("Error while connecting to PostgreSQL", error)
finally:
    # closing database connection.
    if connection:
        cursor.close()
        connection.close()
        print("PostgreSQL connection is closed")
But I struggle to extract data from here. Is it possible to transform these tables into dataframe format, since I will be doing some ML analysis on them?
I'm new to PostgreSQL, please help me with this issue.
There are a few ways to do it.
A very simple way would be to fetch all rows with fetchall() and iterate over them:
cursor.execute(query)
rows = cursor.fetchall()
data = []
for row in rows:
    data.append({'field1': row[0], 'field2': row[1]})
If you are using a Pandas DataFrame, you could do:
df = pd.DataFrame(rows, columns=['field1', 'field2'])
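pandas can also issue the query itself, which avoids the manual fetch-and-rebuild step. A minimal sketch; the query and column names are placeholders, and the DataFrame construction at the end uses inline data so it runs standalone:

```python
import pandas as pd

# With a live psycopg2 connection, pandas can run the query directly:
# df = pd.read_sql("SELECT field1, field2 FROM my_table", connection)

# Building a DataFrame from rows already fetched works the same way:
rows = [(1, "a"), (2, "b")]
df = pd.DataFrame(rows, columns=["field1", "field2"])
print(df.shape)  # (2, 2)
```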

pymssql - SELECT works but UPDATE doesn't

import pymssql
import decimal

CONN = pymssql.connect(server='1233123123', user='s123', password='sa1231231', database='DBforTEST')
CURSOR = CONN.cursor()

# This part is good code; there is no problem here:
CURSOR.execute("SELECT ttt from test where w=2")
ROW = CURSOR.fetchone()
tmp = list()
tmp.append(ROW)
if ROW is None:
    print("table has nothing")
else:
    while ROW:
        ROW = CURSOR.fetchone()
        tmp.append(ROW)
    print(tmp)
# it works!

CURSOR.execute("""
    UPDATE test
    SET w = 16
    WHERE ttt = 1
""")
# it doesn't work
I'm using Python 3.5 with pymssql.
In my code, the SELECT statement works, so I can guarantee the connection is fine.
But the UPDATE statement doesn't work in Python, even though the same code works in SSMS.
What is the problem?
My guess is that SELECT is read-only, so the DB can serve the data, but UPDATE modifies the DB, so the DB blocks it.
How can I solve this?
CONN.commit()
If autocommit is not set, then you have to commit yourself.
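The commit rule is part of the Python DB-API contract, not something specific to pymssql, so it can be demonstrated without a SQL Server instance. The sketch below uses sqlite3 purely because it needs no server; the pattern with pymssql is identical (and pymssql also offers CONN.autocommit(True) if you prefer each statement to commit immediately):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (ttt INTEGER, w INTEGER)")
conn.execute("INSERT INTO test VALUES (1, 2)")

# The UPDATE succeeds immediately, but until commit() the change only
# exists inside the open transaction and is rolled back on close.
conn.execute("UPDATE test SET w = 16 WHERE ttt = 1")
conn.commit()

w = conn.execute("SELECT w FROM test WHERE ttt = 1").fetchone()[0]
print(w)  # 16
```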

Oracle database using Python

How do I avoid creating a table again and again in Python using an Oracle database?
Every time I call the function, the CREATE TABLE query is executed and data is not inserted because the table already exists.
import cx_Oracle
import time

def Database(name, idd, contact):
    try:
        con = cx_Oracle.connect('arslanhaider/12345#AHS:1521/XE')
        cur = con.cursor()
        cur.execute("CREATE TABLE Mazdoor(Name varchar(255), EmpID INT, ContactNo INT)")
        cur.execute("INSERT INTO Mazdoor VALUES(:1, :2, :3)", (name, idd, contact))
        con.commit()
        cur.execute("SELECT * FROM Mazdoor")
        data = cur.fetchall()
        for row in data:
            print(row)
    except cx_Oracle.Error:
        if con:
            con.rollback()
    finally:
        if con:
            con.close()

if __name__ == "__main__":
    while True:
        n = input("Enter Name::")
        i = input("Enter Idd::")
        c = input("Enter Contact No::")
        Database(n, i, c)
        time.sleep(3)
        print("Record Successfully Stored......\n\n")
"Obviously, (koff, koff ...) you must know what you are doing!"
If you ask Oracle to CREATE TABLE, knowing in advance that the table might already exist, then your logic should at least be prepared ... through the use of multiple try..except..finally blocks as appropriate, to handle this situation.
If the CREATE TABLE statement fails because the table already exists, then you can be quite sure that an exception will be thrown, and that you, in the relevant except clause, can determine that "this, indeed, is the reason." You might reasonably then choose to ignore this possibility, and to "soldier on."
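The "catch it and soldier on" approach the answer describes can be sketched generically. The demonstration below uses sqlite3 so it runs without an Oracle instance; with cx_Oracle you would instead catch cx_Oracle.DatabaseError and check the error code for 955 (ORA-00955: "name is already used by an existing object"). The ensure_table helper is a hypothetical name:

```python
import sqlite3

def ensure_table(conn, ddl):
    # Attempt the CREATE TABLE; swallow only the "already exists" error
    # and re-raise anything else so real failures are not hidden.
    try:
        conn.execute(ddl)
        return True   # table was created
    except sqlite3.OperationalError as e:
        if "already exists" not in str(e):
            raise
        return False  # table was already there

conn = sqlite3.connect(":memory:")
ddl = "CREATE TABLE Mazdoor (Name TEXT, EmpID INTEGER, ContactNo INTEGER)"
print(ensure_table(conn, ddl))  # True  -- first call creates the table
print(ensure_table(conn, ddl))  # False -- table already exists, error ignored
```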
