I am trying to move from a Windows-based pyodbc solution (using the SAP Adaptive Server Enterprise 16.0 ODBC driver) to a sybpydb solution on Red Hat Linux 7.9.
Current pyodbc solution:
import pandas
import pyodbc

connection = pyodbc.connect(
    "Driver={Adaptive Server Enterprise};NetworkAddress=<servername,serverport>;"
    "Database=<database>;UID={<username>};PWD={<password>};#pool_size=10;"
    "stmtquery_timeout=1200;#login_timeout=30;#connection_timeout=30")
df = pandas.read_sql_query("exec <storedproc_name>", connection)
connection.close()
I am trying to replicate this under Linux using the sybclient-16.0.3-2 package.
import sybpydb

connection = sybpydb.connect(user=username, password=password, servername=servername,
                             dsn="HostName=<hostname>;Database=<database>;LoginTimeout=30;Timeout=30")
cursor = connection.cursor()
result = cursor.execute("exec <storedproc_name>")
Passing stmtquery_timeout=1200 causes the connection to fail, but without it the call to the stored procedure times out. I can't see anything in the documentation about this parameter.
Thanks in advance
Please refer to the document:
https://help.sap.com/docs/SAP_ASE_SDK/a1576559612d4e39886fc0ad4e093074/b0fd2586bbf910148c6ac638f6594153.html
There is no such attribute as stmtquery_timeout.
If you are using sybpydb, you can use the Open Client SDK directly instead of the ODBC-style configuration for the connection.
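For example, a connection that relies only on the Open Client keyword arguments already shown above, rather than on an ODBC-style dsn string, might look like the following sketch. The placeholders are carried over from the question, and it assumes servername can be resolved on the Linux box (e.g. via the interfaces file):

import sybpydb

# Minimal sketch: connect with Open Client style keyword arguments only (no dsn string).
connection = sybpydb.connect(user=username, password=password, servername=servername)
cursor = connection.cursor()
cursor.execute("exec <storedproc_name>")
rows = cursor.fetchall()
cursor.close()
connection.close()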
I am having trouble connecting to an Azure Postgres database from Python. I am following the guide here: https://learn.microsoft.com/cs-cz/azure/postgresql/connect-python
I have basically the same code for setting up the connection.
But both psycopg2 and SQLAlchemy throw the same error:
OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
I am able to connect to the instance with other client tools like DBeaver, but from Python it does not work.
When I look in the Postgres logs, I can see that the server actually authorized the connection, but the next line says:
could not receive data from client: An existing connection was forcibly closed by the remote host.
Python is 3.7
psycopg2's version is 2.8.5
Azure Postgres region is in West Europe
Does anyone have any suggestions on what I should try to make it work?
Thank you!
EDIT:
The issue resolved itself. I tried the same setup a few days later and it started working. Might have been something wrong with the Azure West Europe region.
I had this issue too. I think I read somewhere (I forget where) that Azure has an issue with the @ you have to use in the username (user@serverName).
I created variables and an f-string and then it worked OK.
import sqlalchemy
username = 'user@server_name'
password = 'PassWord!'
host = 'server_name.postgres.database.azure.com'
database = 'your_database'
conn_str = f'postgresql+psycopg2://{username}:{password}@{host}/{database}'
After that:
engine = sqlalchemy.create_engine(conn_str, pool_pre_ping=True)
conn = engine.connect()
Test it with a simple SQL statement.
sql = 'SELECT * FROM public.some_table;'
results = conn.execute(sqlalchemy.text(sql))
This was a connection in UK South. Before that, it did complain about the format of the username having to use @, although the username was correct, as tested from the command line with psql and another SQL client.
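If you want to rule out SQLAlchemy, roughly the same connection can be tested with psycopg2 directly. This is only a sketch using the placeholder values from the snippet above; sslmode='require' is an assumption, since Azure Database for PostgreSQL normally enforces SSL:

import psycopg2

# Sketch only: the credentials are the placeholders from the SQLAlchemy example above.
conn = psycopg2.connect(
    host='server_name.postgres.database.azure.com',
    user='user@server_name',
    password='PassWord!',
    dbname='your_database',
    sslmode='require',  # assumption: Azure typically requires SSL connections
)
with conn.cursor() as cur:
    cur.execute('SELECT version();')
    print(cur.fetchone())
conn.close()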
I am pretty much new to this concept.
Currently, I am connecting to an Oracle database directly, with the credentials hard-coded in the Python script, but I want to store the connection string in a text file or a separate .py file and reference it from my scripts, since I use the same credentials in multiple scripts.
I am using the cx_Oracle and SQLAlchemy packages to connect to two databases, as I extract data from the source and push it to the target (both the source and the target are Oracle databases).
import cx_Oracle
from sqlalchemy import create_engine, types

# Source database connection (shost, sport, sservice_name, suser, spw are placeholders)
dsn_tns = cx_Oracle.makedsn('shost', 'sport', service_name='sservice_name')
conn = cx_Oracle.connect(user='suser', password='spw', dsn=dsn_tns)
# Target database engine (tuser, tpw, thost, tport, tservice_name are placeholders)
engine = create_engine('oracle://tuser:tpw@thost:tport/tservice_name')
I'd really like to automate this, as I reuse the same connection string in multiple scripts, and I'd appreciate any help.
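For illustration, the "separate .py file" idea could look something like the sketch below. db_config.py and all of its names are hypothetical, and the values are the same placeholders as in the snippet above:

# db_config.py -- hypothetical shared module holding the connection details
import cx_Oracle

SOURCE_USER = 'suser'
SOURCE_PASSWORD = 'spw'
SOURCE_DSN = cx_Oracle.makedsn('shost', 'sport', service_name='sservice_name')
TARGET_URL = 'oracle://tuser:tpw@thost:tport/tservice_name'

# any_extract_script.py -- reuse the shared settings instead of hard-coding them
import cx_Oracle
from sqlalchemy import create_engine
import db_config

conn = cx_Oracle.connect(user=db_config.SOURCE_USER,
                         password=db_config.SOURCE_PASSWORD,
                         dsn=db_config.SOURCE_DSN)
engine = create_engine(db_config.TARGET_URL)

Note that keeping the password in a plain .py file only moves the problem; an external-authentication approach such as the Oracle wallet discussed further below is the more secure option.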
We are connecting to Oracle from Python using the cx_Oracle package.
But the user ID, password, and SID details are hard-coded in the script.
My question is: is there any way to create a data-source kind of thing? And how would we deploy such Python scripts in production?
The database is on one Linux box and Python is installed on another Linux box (a WebLogic server is also installed on that box).
import cx_Oracle
con = cx_Oracle.connect('pythonhol/welcome@127.0.0.1/orcl')
print(con.version)
My expectation is:
Can we deploy Python in a production instance?
If yes, how can we connect to the database while hiding the DB credentials?
Use some kind of 'external authentication', for example a wallet. See the cx_Oracle documentation https://cx-oracle.readthedocs.io/en/latest/user_guide/connection_handling.html#connecting-using-external-authentication
In summary:
create a wallet with mkstore which contains the username/password credentials.
copy the wallet to the machines that are running Python
make sure no bad people can access the wallet
configure Oracle Net files to point to the wallet
Your scripts would then connect like this:
connection = cx_Oracle.connect(dsn="mynetalias", encoding="UTF-8")
or
pool = cx_Oracle.SessionPool(externalauth=True, homogeneous=False, dsn="mynetalias",
encoding="UTF-8")
pool.acquire()
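For reference, "configure Oracle Net files to point to the wallet" usually amounts to entries along these lines; the wallet directory, host, and the mynetalias entry below are placeholders, so adapt them to your environment:

sqlnet.ora:
WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = /path/to/wallet)))
SQLNET.WALLET_OVERRIDE = TRUE

tnsnames.ora:
mynetalias =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(Host = dbhost.example.com)(Port = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )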
I am trying to get data from a Microsoft Access Database. The issue is the number of constraints I'm under:
I have to use 64 bit Python
The access database is made up of linked tables to a different database
The other database requires a 32 bit Oracle driver
Therefore, I have the Access database stored locally and am trying to connect to it using pyodbc.
I've tried looking around and messing with the connection string but this problem seems pretty unique.
This is currently a modified version of what I have:
import pyodbc
dbPATH = r'C:\path\to\database.accdb'
UID = 'username'
PWD = 'password'
driver = r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
credentials = r'DBQ=%s;UID=%s;PWD=%s' % (dbPATH, UID, PWD)
conn_str = driver + credentials
connection = pyodbc.connect(conn_str)
cursor = connection.cursor()
cursor.execute("select * from [table_name];")
for row in cursor.fetchall():  # fetchall() iterates over rows; fetchone() would iterate over one row's columns
    print(row)
This is the error I typically get:
pyodbc.Error: ('HY000', "[HY000] [Microsoft][ODBC Microsoft Access Driver] ODBC--connection to '{Oracle in OraClient11g_home1_32bit}' failed. (-2001) (SQLExecDirectW)")
This is surprising, since that is the driver and connection that the Access database itself uses to connect to the other data source.
I have tried setting
pyodbc.pooling = False
but that did not change anything.
You have two constraints that are mutually exclusive:
I have to use 64 bit Python
... and ...
The other database requires a 32 bit Oracle driver
64-bit processes cannot use 32-bit ODBC drivers; they are simply not compatible. If the rest of your stack (Oracle ODBC driver, Microsoft Office/Access application(s)) is 32-bit then you will need to use a 32-bit version of Python if you want to work with the linked tables as you've described.
Additional Note: Your statement that "The other database requires a 32 bit Oracle driver" is dubious. Client-server databases like Oracle, SQL Server, etc., don't particularly care if they receive requests from a 32-bit client or a 64-bit client. There may be differences in the details on the client side, but that's for the ODBC Driver (and/or the ODBC Driver Manager) on the client to figure out.
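As a quick aside, if you are not sure which bitness a given Python interpreter has, you can check it from Python itself; this is a minimal snippet, not part of the original answer:

import struct

# Prints 64 for a 64-bit interpreter and 32 for a 32-bit one
print(struct.calcsize('P') * 8, 'bit')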
I'm trying to connect to an Oracle database from an Excel application, without a DSN. I found on a website that it's possible to use ADO, so that is what I tried to do. I'm new to this, so I just copied what I found on that website.
Here is my code so far:
Sub ADOtest()
    Dim connection As New ADODB.Connection
    connection.ConnectionString = "UID=user1;PWD=my_pwd;DRIVER={Microsoft ODBC for Oracle};Server=localhost;Database=orcl.my_domain;"
    connection.Open
End Sub
When I run this code, I get an error saying that the driver was not found.
The problem is that I have no idea what I have to do with the driver (how to install and configure it). Also, I don't know which one I should use: I've read that there is a driver from Microsoft and another one from Oracle, and I've also seen something about providers like MSDAORA.
The program will be used by many users, so I would like to choose the solution that is the lightest (not much to install on computers).
Thank you!
For COM based ADO (ADODB) you can use the OLE DB Providers.
One is from Oracle, called "Oracle Provider for OLE DB". You can download it from the 32-bit Oracle Data Access Components (ODAC) and NuGet downloads (assuming your Excel is 32-bit). The connection string would be:
"Provider=OraOLEDB.Oracle;Data Source=orcl;User ID=myUsername;Password=myPassword"
The other one is from Microsoft. Please note, this provider is deprecated and you should not use it for new projects. It is usually already available on Windows. Be aware that, like the provider from Oracle, it also requires an Oracle Client to be installed on the PC! The connection string would be:
"Provider=MSDAORA;Data Source=orcl;User ID=myUsername;Password=myPassword"
The data source is usually defined in the tnsnames.ora file or on an LDAP server, for example:
orcl.my_domain =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(Host = localhost)(Port = 1521))
(CONNECT_DATA =
(SERVICE_NAME = orcl)
)
)
If you don't have such an entry, you can put everything into the connection string, e.g.
"Provider=OraOLEDB.Oracle;Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=localhost)(Port=1521))(CONNECT_DATA=(SERVICE_NAME=orcl)));User ID=myUsername;Password=myPassword"
Perhaps you have to enclose the data source value in double quotes ("); I am not sure.
So, in any case, you would have to install an Oracle Client on all PCs.
Where is your database server hosted? In your question you say Server=localhost;, which seems unlikely, i.e. it contradicts "The program will be used by many users". I doubt everybody has an Oracle Database server installed on their local machine.