connecting to oracle db from python using a config file - python-3.x

I am pretty new to this concept.
Currently I connect to an Oracle database directly, with the credentials hard-coded in the Python script. Since I use the same credentials in multiple scripts, I want to store the connection string in a text file or a separate .py file and read it from there.
I am using the cx_Oracle and SQLAlchemy packages to connect to two databases, extracting data from the source and pushing it to the target (both the source and the target are Oracle databases).
import cx_Oracle
from sqlalchemy import create_engine, types
# source connection via cx_Oracle
dsn_tns = cx_Oracle.makedsn('shost', 'sport', service_name='sservice_name')
conn = cx_Oracle.connect(user='suser', password='spw', dsn=dsn_tns)
# target connection via SQLAlchemy ('@' separates the credentials from the host)
engine = create_engine('oracle://tuser:tpw@thost:tport/tservice_name')
I'd really like to automate this, since I reuse the same connection string in multiple scripts, and I'd appreciate any help.
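One common approach, sketched below on the assumption that the credentials live in an INI-style file (the file name db_config.ini and its section and key names are made up for illustration), is to read them with configparser and build both connections from that single file:
# db_config.ini (hypothetical layout)
# [source]
# user = suser
# password = spw
# host = shost
# port = 1521
# service_name = sservice_name
# [target]
# url = oracle://tuser:tpw@thost:tport/tservice_name
import configparser
import cx_Oracle
from sqlalchemy import create_engine

config = configparser.ConfigParser()
config.read('db_config.ini')
src = config['source']
dsn_tns = cx_Oracle.makedsn(src['host'], src['port'], service_name=src['service_name'])
conn = cx_Oracle.connect(user=src['user'], password=src['password'], dsn=dsn_tns)
engine = create_engine(config['target']['url'])
Keep the file outside version control and restrict its permissions, since it still holds plaintext passwords.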

Related

SAP ASE Extension Python Module: #stmt_query_timeout?

I am trying to move from a Windows-based pyodbc solution (using the SAP Adaptive Server Enterprise 16.0 driver) to a sybpydb solution on Red Hat Linux 7.9.
Current pyodbc solution:
import pandas
import pyodbc
connection = pyodbc.connect(
    "Driver={Adaptive Server Enterprise};NetworkAddress=<servername,serverport>;"
    "Database=<database>;UID={<username>};PWD={<password>};#pool_size=10;"
    "stmtquery_timeout=1200;#login_timeout=30;#connection_timeout=30")
df = pandas.read_sql_query("exec <storedproc_name>", connection)  # pass the open connection
connection.close()
I am trying to replicate this on Linux using the sybclient-16.0.3-2 package.
import sybpydb
connection = sybpydb.connect(user=username, password=password, servername=servername,
                             dsn="HostName=<hostname>;Database=<database>;LoginTimeout=30;Timeout=30")
cursor = connection.cursor()
result = cursor.execute("exec <storedproc_name>")
Passing #smtmquery_timeout=1200 causes the connection to fail, but without it the call to the stored procedure times out. I can't see anything in the documentation about this.
Thanks in advance
Please refer to the documentation:
https://help.sap.com/docs/SAP_ASE_SDK/a1576559612d4e39886fc0ad4e093074/b0fd2586bbf910148c6ac638f6594153.html
There is no attribute named smtmquery_timeout.
If you are using sybpydb, you can use the Open Client SDK interface directly instead of the ODBC-style configuration for the connection.
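A minimal sketch of that suggestion, reusing the placeholders from the question (whether this resolves the stored-procedure timeout depends on the server-side settings):
import sybpydb
# connect with sybpydb's native keyword arguments instead of an ODBC-style dsn string
connection = sybpydb.connect(user=username, password=password, servername=servername)
cursor = connection.cursor()
cursor.execute("exec <storedproc_name>")
rows = cursor.fetchall()
connection.close()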

IBM AS400 File not Found error while the Table Object is still there

I'm relatively new to working with remote IBM databases. I am connecting to a remote IBM AS400 database using a Python 3 ODBC connector in an Anaconda virtual environment on Windows 10. The connection that works successfully is:
import pyodbc
connection = pyodbc.connect(
    Driver='{iSeries Access ODBC Driver}',
    System='<host>',
    database='<database>',
    uid='<username>',
    pwd='<password>')
c1 = connection.cursor()
print('Connection established')
After connecting I run this command to see the list of tables:
c1.execute("select table_name from sysibm.sqltables")
And I see all the tables that I would need to query. But then when I try to query the contents of a specific table using:
c1.execute("select * from <database>.<table> LIMIT 100")
I get an error:
ProgrammingError: ('42S02', '[42S02] [IBM][System i Access ODBC Driver][DB2 for i5/OS]SQL0204 - <table> der Art *FILE in <database> nicht gefunden. (-204) (SQLExecDirectW)')
(The message is in German; it means: <table> of type *FILE in <database> not found.)
(I'm not actually using angle brackets; they are just placeholders.)
But software like DBeaver returns valid data for both of them, the table-list query and the specific-table query. Only Python gives the error.
Can anyone point out what I could be doing wrong, or what I can run to pinpoint the problem?
<database>.<table> is not correct.
On Db2 the hierarchy is
database
- schema
-- table
You should be using <schema>.<table>, since the database has already been specified in the connection string.
<database>.<schema>.<table> is supported when connecting to a remote database from the "local" one, but only for basic selects. Db2 for IBM i does not, for example, support joining a "local" table with a remote one.
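A minimal sketch of the corrected query, assuming the table lives in a schema (library) named <schema>; the table_schem column of sysibm.sqltables shows which schema each table belongs to:
# list tables together with their schemas to find the right qualifier
c1.execute("select table_schem, table_name from sysibm.sqltables")
print(c1.fetchall())
# qualify by schema, not by database
c1.execute("select * from <schema>.<table> LIMIT 100")
rows = c1.fetchall()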

The exe file of python app creates the database but doesn't create tables in some PCs

I have implemented an application using Python, PostgreSQL, PyQt5 and SQLAlchemy, and built an executable for it with PyInstaller.
I tried installing this app on some laptops which already had PostgreSQL 13 installed, but there is a problem.
On some laptops everything runs successfully: the database is created along with its tables in PostgreSQL, we can check it through pgAdmin 4, and we can work with the application. On other laptops, however, only the database is created, not its tables; the console just stops, nothing appears, and in pgAdmin there is only the database name without its tables.
P.S.: the systems are Windows 10 and Windows 7.
I have no idea what to check or what to do; I would appreciate any ideas.
The following code is base.py:
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy_utils import database_exists, create_database
from sqlalchemy import Column

engine = create_engine('postgresql://postgres:123@localhost:5432/db23')
if not database_exists(engine.url):
    create_database(engine.url)
Session = sessionmaker(bind=engine)
Base = declarative_base()
and the following function is called in the app's initializer:
def the_first_time_caller(self):
    session = Session()
    # 2 - generate database schema
    Base.metadata.create_all(engine)  # create tables in the database
    session.commit()
    session.close()
After updating Python and downgrading SQLAlchemy, it now runs successfully.
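For anyone debugging the same symptom, a small sketch that may help: since the packaged exe closes before a traceback can be read, wrapping the schema creation in a try/except that logs to a file (the file name here is arbitrary) lets the failing machines record why create_all stopped:
import logging
logging.basicConfig(filename='app_db.log', level=logging.DEBUG)

def the_first_time_caller(self):
    session = Session()
    try:
        Base.metadata.create_all(engine)  # create tables in the database
        session.commit()
    except Exception:
        logging.exception("creating tables failed")  # keep the traceback in a readable file
        raise
    finally:
        session.close()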

Connecting to oracle db from python

We are connecting to Oracle from Python using the cx_Oracle package, but the user id, password and SID details are hard-coded in it.
My question is: is there any way to create a data-source kind of thing? And how do we deploy such Python scripts in production?
The database is on one Linux box, and Python is installed on another Linux box (a WebLogic server is also installed on that box).
import cx_Oracle
con = cx_Oracle.connect('pythonhol/welcome@127.0.0.1/orcl')
print(con.version)
My expectation is:
Can we deploy Python in a production instance?
If yes, how can we connect to the database while hiding the DB credentials?
Use some kind of 'external authentication', for example a wallet. See the cx_Oracle documentation: https://cx-oracle.readthedocs.io/en/latest/user_guide/connection_handling.html#connecting-using-external-authentication
In summary:
create a wallet with mkstore which contains the username/password credentials.
copy the wallet to the machines that are running Python
make sure no bad people can access the wallet
configure Oracle Net files to point to the wallet
Your scripts would then connect like:
connection = cx_Oracle.connect(dsn="mynetalias", encoding="UTF-8")
or
pool = cx_Oracle.SessionPool(externalauth=True, homogeneous=False, dsn="mynetalias",
                             encoding="UTF-8")
pool.acquire()
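For context, a rough sketch of the wallet setup those steps refer to; the wallet directory, host, port and service name below are placeholders, so check the Oracle client documentation for the exact syntax of your version:
# create the wallet and store the credentials for the alias (run once):
mkstore -wrl /opt/wallets/mywallet -create
mkstore -wrl /opt/wallets/mywallet -createCredential mynetalias dbuser dbpassword
# sqlnet.ora - point Oracle Net at the wallet and let it supply the credentials:
WALLET_LOCATION = (SOURCE = (METHOD = FILE)(METHOD_DATA = (DIRECTORY = /opt/wallets/mywallet)))
SQLNET.WALLET_OVERRIDE = TRUE
# tnsnames.ora - define the alias used as dsn="mynetalias":
mynetalias = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))(CONNECT_DATA = (SERVICE_NAME = orcl)))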

Flask SQLalchemy can't connect to Google Cloud Postgresql database with Unix socket

I am using Flask-SQLAlchemy in my Google App Engine standard environment project to try to connect to my GCP PostgreSQL database.
According to the Google docs, the URL can be created in this format:
# postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_socket=/cloudsql/<cloud_sql_instance_name>
and below is my code
from flask import Flask, request, jsonify
from flask_sqlalchemy import SQLAlchemy
import constants

app = Flask(__name__)
# Database configuration for GCP (postgres+pg8000)
DB_URL = 'postgres+pg8000://{user}:{pw}@/{db}?unix_socket=/cloudsql/{instance_name}'.format(
    user=user, pw=password, db=dbname, instance_name=instance_name)
app.config['SQLALCHEMY_DATABASE_URI'] = DB_URL
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False  # silence the deprecation warning
sqldb = SQLAlchemy(app)
This is the error I keep getting:
File "/env/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 412, in connect
    return self.dbapi.connect(*cargs, **cparams)
TypeError: connect() got an unexpected keyword argument 'unix_socket'
The argument to specify a unix socket varies depending on what driver you use. According to the pg8000 docs, you need to use unix_sock instead of unix_socket.
To see this in the context of an application, you can take a look at this sample application.
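Applied to the code in the question, the fix would look roughly like this (the placeholders are unchanged, and whether the .s.PGSQL.5432 suffix is required depends on the client library, as the note further down explains):
# same URI as before, but using pg8000's keyword unix_sock instead of unix_socket
DB_URL = 'postgres+pg8000://{user}:{pw}@/{db}?unix_sock=/cloudsql/{instance_name}/.s.PGSQL.5432'.format(
    user=user, pw=password, db=dbname, instance_name=instance_name)
app.config['SQLALCHEMY_DATABASE_URI'] = DB_URL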
It's been more than 1.5 years and no one has posted the solution yet :)
Anyway, just use the URI below:
postgres+psycopg2://<db_user>:<db_pass>@<public_ip>/<db_name>?host=/cloudsql/<cloud_sql_instance_name>
And yes, don't forget to add your system's public IP address to the authorized networks.
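Plugged into the Flask configuration from the question, that variant would look roughly like this (public_ip is a placeholder for the instance's public address):
# psycopg2 driver, using the URI format from the answer above
DB_URL = ('postgres+psycopg2://{user}:{pw}@{public_ip}/{db}'
          '?host=/cloudsql/{instance_name}').format(
    user=user, pw=password, public_ip=public_ip, db=dbname, instance_name=instance_name)
app.config['SQLALCHEMY_DATABASE_URI'] = DB_URL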
Example from the docs
As you can read in the gcloud guides, an example connection string is
postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_sock=<socket_path>/<cloud_sql_instance_name>/.s.PGSQL.5432
Varying engine and socket part
Be aware that the engine part postgres+pg8000 varies depending on your database and the driver you use. Also, depending on your database client library, the socket part ?unix_sock=<socket_path>/<cloud_sql_instance_name>/.s.PGSQL.5432 may be needed or can be omitted, as per:
Note: The PostgreSQL standard requires a .s.PGSQL.5432 suffix in the socket path. Some libraries apply this suffix automatically, but others require you to specify the socket path as follows: /cloudsql/INSTANCE_CONNECTION_NAME/.s.PGSQL.5432.
PostgreSQL and flask_sqlalchemy
For instance, I am using PostgreSQL with flask_sqlalchemy as the database client and pg8000 as the driver, and my working connection string is just postgres+pg8000://<db_user>:<db_pass>@/<db_name>.
