cx_Oracle returning zero rows - python-3.x

I am new to connecting to an Oracle DB (version 19.3) through Python 3.6, and I am not getting any rows back. Please tell me what I am missing here.
I think the connection is set up correctly, because there is no connection error or invalid-password error. I tried fetchall(), fetchmany(), rowcount, etc., and everything returns zero. Printing the cursor object itself works, and I ran the query directly in my DB. I have Oracle DB 19.3 installed locally and use Oracle SQL Developer to run SQL. Is there anything else I need to install?
import cx_Oracle

# Connect using the EZ Connect string "host/service_name"
conn = cx_Oracle.connect("username", "password", "localhost/orcl")
cur = conn.cursor()
cur.execute('select * from emp')
for line in cur:
    print(line)
cur.close()
conn.close()

After running
delete from emp;
your Python program will display zero rows, as you report.
You may wish to INSERT one or more rows before running it.
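For example, a minimal sketch (my own, assuming the classic EMP columns EMPNO and ENAME; adjust to your actual schema):
cur.execute("insert into emp (empno, ename) values (:1, :2)", (7999, "TEST"))
conn.commit()  # without a commit, the new row is only visible to this session

cur.execute("select count(*) from emp")
print(cur.fetchone())  # e.g. (1,)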
You may also find the interface offered by SQLAlchemy more convenient; it offers application portability across Oracle and other DB backends.
Cf https://developer.oracle.com/dsl/prez-python-queries.html
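A minimal sketch of that SQLAlchemy route (the connect string is an assumption based on the question's "localhost/orcl"; adjust user, password, and service name):
import sqlalchemy

engine = sqlalchemy.create_engine(
    "oracle+cx_oracle://username:password@localhost/?service_name=orcl"
)
with engine.connect() as connection:
    # text() wraps the raw SQL so it can be executed directly
    for row in connection.execute(sqlalchemy.text("select * from emp")):
        print(row)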

Related

PyMySQL escaping args does not work for database names

To prevent SQL injection I do not use f-strings or format() for queries with dynamic content. This works fine for all CRUD operations on tables, but when I try to create a test database with it, it does not work. My code is simple:
def test_create_and_load_dump(connection):
    with connection.cursor() as cursor:
        cursor.execute("CREATE OR REPLACE DATABASE %s", ("foo",))
connection is just the normal pymysql.Connection.
This code raises the following error:
pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ''foo'' at line 1")
The raw query after the execution of mogrify() from PyMySQL is
'CREATE OR REPLACE DATABASE \'foo\''
So it quotes the name as a string literal, which causes the error above.
Is this a bug in the PyMySQL library or am I doing something wrong here?
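For reference, a hedged sketch of the usual workaround (mine, not from the original post): %s placeholders only substitute values, so an identifier such as a database name has to be validated and quoted separately, for example:
def quote_identifier(name):
    # Backtick-quote the identifier and escape embedded backticks,
    # per MySQL/MariaDB identifier quoting rules.
    return "`" + name.replace("`", "``") + "`"

def test_create_and_load_dump(connection, db_name="foo"):
    with connection.cursor() as cursor:
        # Interpolation is acceptable here because the identifier is quoted above.
        cursor.execute(f"CREATE OR REPLACE DATABASE {quote_identifier(db_name)}")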

Errors when attempting to update multiple column values in a MySQL DB using multiple WHERE clause values

I'm currently getting an error when attempting to update two columns of my MySQL database using mysql.connector and python 3.6. When I execute the command below I get:
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'order='1' WHERE (match_id='2051673' AND gametime=80 AND event_name='Pass')' at line 1
But, as far as I can tell, my command is perfectly legit. What am I doing incorrectly? Thanks!
for item in pass_list:
    query = """UPDATE events SET event_key=%s AND order=%s
               WHERE (match_id=%s AND gametime=%s AND event_name=%s)"""
    values = (item[0], item[7], item[1], item[2], item[3])
    cur.execute(query, values)
conn.commit()
conn.close()
For good or bad, order is a SQL keyword. You can put backticks around it:
`order`
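A hedged sketch of the corrected statement (note that MySQL's SET clause also expects comma-separated assignments rather than AND):
query = """UPDATE events SET event_key=%s, `order`=%s
           WHERE (match_id=%s AND gametime=%s AND event_name=%s)"""
cur.execute(query, (item[0], item[7], item[1], item[2], item[3]))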

Is there a way to use 'pool_reset_connection' from mysql-connector-python with MariaDB 10.4.7?

I want to change my Python program from a normal connection to a connection pool so that the database connection doesn't get lost when no queries are sent for some hours, and so that the database isn't overwhelmed by a bunch of queries at once at peak usage.
I'm using Python 3.7.4, mysql-connector-python 8.0.17, and MariaDB 10.4.7.
It works fine when I use a normal connection, but MariaDB apparently doesn't support the pool_reset_session setting of mysql.connector.pooling.MySQLConnectionPool. At the start of my code, it tries to create the database if it doesn't exist yet, and that is what causes the errors I get.
import mysql.connector as mariadb
from mysql.connector import errorcode
from mysql.connector import pooling

cnx = mariadb.pooling.MySQLConnectionPool(user='root', password='password', host='localhost',
                                          pool_name='connectionpool', pool_size=10,
                                          pool_reset_session=True)
try:
    db = cnx.get_connection()
    cursor = db.cursor()
    cursor.execute("CREATE DATABASE IF NOT EXISTS tests")
    print("Created database")
except mariadb.Error as err:
    print(f"Failed creating database: {err}")
finally:
    print("Finally (create)")
    db.close()
I expected that this snippet would just create the database tests but instead I got the following two errors:
mysql.connector.errors.NotSupportedError: MySQL version 5.7.2 and earlier does not support COM_RESET_CONNECTION.
as well as
mysql.connector.errors.OperationalError: 1047 (08S01): Unknown command
From the traceback logs, it looks like this gets caused by trying to execute db.close() in line 17.
Full output with traceback:
https://pastebin.com/H3SAvA9N
I am asking what I can do to fix this, and whether it is possible at all to use this sort of connection pooling with MariaDB 10.4.7. (I am confused because the error says that MySQL <= 5.7.2 doesn't support resetting connections after use, even though I use MariaDB 10.4.7.)
I also found out that MariaDB Connector/J provides such an option, called useResetConnection, but I don't want to learn Java just for this feature.
I was facing the same issue with mysql-connector-python for MariaDB. I downgraded mysql-connector-python to version 8.0.12 and it worked for me.
As Georg Richter indicates, MariaDB for historical reasons returns a version string like
"5.5.5-10.4.10-MariaDB-1:10.4.10+maria~bionic-log"
The MySQL Python connector checks the version explicitly (https://github.com/mysql/mysql-connector-python/blob/b034f25ec8037f5d60015bf2ed4ee278ec12fd17/lib/mysql/connector/connection.py#L1157), and since the MariaDB server appears as version 5.5.5, an error is thrown.
Since MariaDB 10.2.6, you can explicitly set the reported version in the [server] section of the cnf file, with a configuration like:
[server]
. . .
version=5.7.99-10.4.10-MariaDB
The connector will see version 5.7.99 and behave accordingly.
When bumping the server version to 10.0, MariaDB had to add a prefix to the server version to avoid breaking replication (the replication protocol expects a one-digit major version number; for more information check this answer).
No matter whether you use MariaDB 10.0 or 10.4, MySQL Connector/Python will always report version number 5.5.5:
>>> conn = mysql.connector.connect(user="foo")
>>> print(conn.get_server_version())
(5, 5, 5)
>>> cursor = conn.cursor()
>>> cursor.execute("select version()")
>>> row = cursor.fetchone()
>>> print(row)
('10.4.7-MariaDB-log',)
COM_RESET_CONNECTION, which resets the connection on the server side, was introduced in MariaDB 10.2, so to make it work you have to change the code of MySQL Connector/Python, e.g. in _check_server_version (abstracts.py):
+ if server_version.startswith("5.5.5-"):
+     server_version = server_version[6:]
This is of course not a generic solution, since it will not work for MariaDB versions prior to 10.2. It might also have bad side effects when checking for certain features such as the X Protocol, which isn't supported by MariaDB.
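To illustrate what that patch does, a standalone sketch (not the actual connector code):
server_version = "5.5.5-10.4.10-MariaDB-1:10.4.10+maria~bionic-log"
if server_version.startswith("5.5.5-"):
    # Strip the historical "5.5.5-" replication prefix so the real
    # MariaDB version is what gets compared against feature checks.
    server_version = server_version[6:]
print(server_version)  # 10.4.10-MariaDB-1:10.4.10+maria~bionic-log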

How to load a Pandas DataFrame to a Redshift server using an ODBC driver?

I am relatively new to Python programming. I have thoroughly searched previously answered questions related to this but couldn't find a good solution.
Problem: I intend to use the ODBC driver installed on my system to connect to a Redshift database. All entities (server name, host, port, username, and password) are configured in the DSN. I was able to connect to the database and read a table using the following code:
import pyodbc
import pandas as pd
conn = pyodbc.connect('DSN=AWSDW')
Query = """select *
from <table_name>
limit 10"""
df2 = pd.read_sql(Query,conn)
But the problem is that I can't load this DataFrame into Redshift. Below is the code that I am trying to run:
import sqlalchemy

engine = sqlalchemy.create_engine('postgresql+pyodbc://AWSDW')
df2.to_sql('Abhi_Testing_Python_2',
           engine,
           schema='sandbox',
           index=False,
           if_exists='replace')
I know something needs to change in the connection string passed to create_engine, but I just don't know what.
I am open to using some other method as long as I don't have to hard code my username and password in the code.
I found that you can't use the postgresql dialect with the pyodbc driver.
https://www.codepowered.com/manuals/SQLAlchemy-0.6.9-doc/html/core/engines.html
So I ended up not using the Amazon driver I had installed and used psycopg2 instead.
connection_string = 'postgresql+psycopg2://' + username + ':' + password + '@' + HOST + ':' + str(PORT) + '/' + DATABASE
engine = create_engine(connection_string)
This works. The only drawback was that I had to hard-code the host name in my code.
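If you also want to avoid hard-coding the credentials (as the question asks), one common pattern, sketched here as an assumption rather than part of the original answer, is to read them from environment variables:
import os
from sqlalchemy import create_engine

# Hypothetical environment variable names; set them however your deployment prefers.
username = os.environ['REDSHIFT_USER']
password = os.environ['REDSHIFT_PASSWORD']
HOST = os.environ['REDSHIFT_HOST']
PORT = int(os.environ.get('REDSHIFT_PORT', 5439))
DATABASE = os.environ['REDSHIFT_DB']

connection_string = f'postgresql+psycopg2://{username}:{password}@{HOST}:{PORT}/{DATABASE}'
engine = create_engine(connection_string)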

Unable to read column types from amazon redshift using psycopg2

I'm trying to access the types of columns in a table in redshift using psycopg2.
I'm doing this by running a simple query on pg_table_def, as follows:
SELECT * FROM pg_table_def;
This returns the traceback:
psycopg2.NotSupportedError: Column "schemaname" has unsupported type "name"
So it seems like the types of the columns that store schema (and other similar information on further queries) are not supported by psycopg2.
Has anyone run into this issue or a similar one and is aware of a workaround? My primary goal in this is to be able to return the types of columns in the table. For the purposes of what I'm doing, I can't use another postgresql adapter.
Using:
python- 3.6.2
psycopg2- 2.7.4
pandas- 0.17.1
You could do something like the snippet below and return the result back to the calling service.
cur.execute("select * from pg_table_def where tablename='sales'")
results = cur.fetchall()
for row in results:
    print("ColumnName=>" + row[2] + ",DataType=>" + row[3] + ",encoding=>" + row[4])
Not sure about the exception; if all the permissions are fine, it should work and print something like the following.
ColumnName=>salesid,DataType=>integer,encoding=>lzo
ColumnName=>commission,DataType=>numeric(8,2),encoding=>lzo
ColumnName=>saledate,DataType=>date,encoding=>lzo
ColumnName=>description,DataType=>character varying(255),encoding=>lzo
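Since the stated goal is to return the column types, a small extension of the snippet above (my own sketch, not from the original post) collects them into a dict:
cur.execute("select * from pg_table_def where tablename='sales'")
# Map each column name (row[2]) to its data type (row[3]).
column_types = {row[2]: row[3] for row in cur.fetchall()}
print(column_types)  # e.g. {'salesid': 'integer', 'commission': 'numeric(8,2)', ...}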
