How to connect to a SQL Server database running on Amazon Lightsail using Python?

import pyodbc
import numpy
import pandas as pd
import pypyodbc

def sql_conn():
    conn = pyodbc.connect("Driver={ODBC Driver 17 for SQL Server};"
                          "Server=;"
                          "Database=db_name;"
                          "uid=xxxx;pwd=xxxx;")
    cursor = conn.cursor()
    cursor.execute('SELECT * FROM dbo.ImageDB')
    for row in cursor:
        print(row)
It's running on a Windows Server 2012 R2.
Whenever I run the Python script, I keep getting the message
Process finished with exit code 0
I know the connection wasn't made. How do I get the server name? Is it a combination of the server IP and the SQL Server name? Do I need to provide a port number? I tried a bunch of combinations for the server name, but they all give the same output.
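(One thing worth noting: in the snippet above, sql_conn() is defined but never called, which by itself would produce a silent exit with code 0.) For reference, the Server value is usually the host, i.e. the public IP or DNS name of the Lightsail instance, optionally followed by a comma and the port, or a backslash and the instance name. A minimal sketch, assuming the default port 1433 is open in the Lightsail firewall; the IP and credentials below are placeholders:

import pyodbc

# Common Server= formats (all values are placeholders):
#   Server=203.0.113.10;            default instance on port 1433
#   Server=203.0.113.10,1433;       explicit port, needed for non-default ports
#   Server=203.0.113.10\SQLEXPRESS; named instance
conn = pyodbc.connect("Driver={ODBC Driver 17 for SQL Server};"
                      "Server=203.0.113.10,1433;"
                      "Database=db_name;"
                      "uid=xxxx;pwd=xxxx;")
print(conn)  # reaching this line without an exception means the connection was made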
Also, which is better: pyodbc or pypyodbc?
I am sorry if this sounds like a stupid question, but I am really new to this and any help would be appreciated.
Thanks.

Related

load pandas dataframe into Redshift

I am trying to load a pandas DataFrame into Redshift, but it keeps giving me an error. Please guide me; I need help correcting the cluster configuration to make this work.
Below is my code and error traceback:
from sqlalchemy import create_engine
import pyodbc
import psycopg2

username = "#####"
host = "redshift-cluster-****.*****.ap-south-1.redshift.amazonaws.com"
driver = "Amazon Redshift (x64)"
port = 5439
pwd = "******"
db = "dev"
table = "tablename"

rs_engine = create_engine(f"postgresql://{username}:{pwd}@{host}:{port}/{db}")
df.to_sql(table, con=rs_engine, if_exists='replace', index=False)
Traceback:
OperationalError: (psycopg2.OperationalError) connection to server at "redshift-cluster-****.****.ap-south-1.redshift.amazonaws.com" (3.109.77.136), port 5439 failed: Connection timed out (0x0000274C/10060)
Is the server running on that host and accepting TCP/IP connections?
I even tried the options below, but I get the same error:
rs_engine = create_engine(f"redshift+psycopg2://{username}#{host}:{port}/{db}")
rs_engine = create_engine(f"postgresql+psycopg2://{username}:{pwd}#{host}:{port}/{db}")
rs_engine = redshift_connector.connect(
host='redshift-cluster-####.****.ap-south-1.redshift.amazonaws.com',
database='dev',
user='****',
password='#####'
)
Also, the Publicly accessible setting is enabled on the Redshift cluster. I am still unable to connect and load the data.
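A timeout at this point usually means the TCP connection itself is being blocked (for example by the cluster's VPC security group), rather than rejected credentials, since authentication errors come back quickly. A driver-independent way to check reachability from the client machine is a plain socket test; the host below is a placeholder:

import socket

host = "redshift-cluster-xxxx.xxxx.ap-south-1.redshift.amazonaws.com"  # placeholder
port = 5439

try:
    # create_connection raises OSError if the port is unreachable within the timeout
    with socket.create_connection((host, port), timeout=5):
        print("TCP connection succeeded; check driver settings and credentials next")
except OSError as exc:
    print(f"TCP connection failed: {exc}; check the security group / network rules")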
UPDATE:
I also tried using the ODBC driver, but I get the same error:
import pyodbc

cnxn = pyodbc.connect(Driver=driver,
                      Server=host,
                      Database=db,
                      UID=username, PWD=pwd, Port=port)
When I tried to set it up using the ODBC Data Sources app, I got the same error.

SAP ASE Extension Python Module: @stmt_query_timeout?

I am trying to move from a Windows-based pyodbc solution (using the SAP Adaptive Server Enterprise 16.0 driver) to a Red Hat Enterprise Linux 7.9-based sybpydb solution.
Current pyodbc solution:
import pandas
import pyodbc

connection = pyodbc.connect(
    "Driver={Adaptive Server Enterprise};NetworkAddress=<servername,serverport>;"
    "Database=<database>;UID={<username>};PWD={<password>};@pool_size=10;"
    "@stmt_query_timeout=1200;@login_timeout=30;@connection_timeout=30")
df = pandas.read_sql_query("exec <storedproc_name>", connection)
connection.close()
I am trying to replicate this under Linux using the sybclient-16.0.3-2 package.
import sybpydb

connection = sybpydb.connect(user=username, password=password, servername=servername,
                             dsn="HostName=<hostname>;Database=<database>;LoginTimeout=30;Timeout=30")
cursor = connection.cursor()
result = cursor.execute("exec <storedproc_name>")
Passing @stmt_query_timeout=1200 causes the connection to fail, but without it the call to the stored proc times out. I can't see anything in the documentation about this.
Thanks in advance
Please refer to the document:
https://help.sap.com/docs/SAP_ASE_SDK/a1576559612d4e39886fc0ad4e093074/b0fd2586bbf910148c6ac638f6594153.html
There is no attribute: stmt_query_timeout
If you are using "sybpydb", you can use the Open Client SDK directly instead of the ODBC-style configuration for the connection.
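For example, a minimal sketch of such a direct connection, assuming the server has an entry in the Open Client interfaces file so that no ODBC-style dsn string is needed (names in angle brackets are placeholders, and the cursor methods are the standard DB-API ones):

import sybpydb

# servername must match an entry in the Open Client interfaces file
connection = sybpydb.connect(user="<username>", password="<password>",
                             servername="<servername>")
cursor = connection.cursor()
cursor.execute("exec <storedproc_name>")
for row in cursor.fetchall():
    print(row)
connection.close()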

The exe file of a Python app creates the database but doesn't create tables on some PCs

I have implemented an application using Python, PostgreSQL, PyQt5, and SQLAlchemy, and built an executable for it with PyInstaller.
I tried installing this app on some laptops that already had PostgreSQL 13 installed. But there is a problem.
On some laptops everything runs successfully: the database is created along with its tables in PostgreSQL (we can check it through pgAdmin 4) and we can work with the application. But on other laptops the database is created without its tables; the console stops and nothing appears, and when we check pgAdmin there is only the database name, not its tables.
P.S: the systems are Windows 10 and Windows 7.
I have no idea what to check or what to do. I would appreciate it if anyone could give me any ideas.
The following code is base.py:
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy_utils import database_exists, create_database
from sqlalchemy import Column

engine = create_engine('postgresql://postgres:123@localhost:5432/db23')
if not database_exists(engine.url):
    create_database(engine.url)

Session = sessionmaker(bind=engine)
Base = declarative_base()
and the following function is called in the initializer function of the app:
def the_first_time_caller(self):
    session = Session()
    # 2 - generate database schema
    Base.metadata.create_all(engine)  # create tables in the database
    session.commit()
    session.close()
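One way to see why create_all() finishes without creating anything on the failing machines is to turn on SQLAlchemy's statement logging and stop swallowing exceptions; a minimal sketch, reusing the engine URL and Base from base.py above:

from sqlalchemy import create_engine

# echo=True prints every SQL statement SQLAlchemy emits, so you can see
# whether any CREATE TABLE is sent on the machines where tables are missing
engine = create_engine('postgresql://postgres:123@localhost:5432/db23', echo=True)

try:
    Base.metadata.create_all(engine)  # Base comes from base.py above
except Exception as exc:
    # a windowed .exe can hide this; print or log it somewhere visible before re-raising
    print(f"create_all failed: {exc}")
    raise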
After updating Python and downgrading SQLAlchemy, it runs successfully now.

Connecting to Azure PostgreSQL server from python psycopg2 client

I am having trouble connecting to an Azure PostgreSQL database from Python. I am following the guide here - https://learn.microsoft.com/cs-cz/azure/postgresql/connect-python
I have basically the same code for setting up the connection.
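The connection setup in that guide looks roughly like the following sketch (placeholder values; note that Azure Database for PostgreSQL Single Server expects the user@servername login format and an SSL connection):

import psycopg2

# Placeholder values, following the pattern from the linked Microsoft guide
conn = psycopg2.connect(
    host="mydemoserver.postgres.database.azure.com",
    user="mylogin@mydemoserver",
    dbname="mypgsqldb",
    password="<server_admin_password>",
    sslmode="require",
)
cursor = conn.cursor()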
But psycopg2 and SQLAlchemy both throw the same error:
OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
I am able to connect to the instance with other client tools like DBeaver, but from Python it does not work.
When I investigate the Postgres logs, I can see that the server actually authorized the connection, but the next line says
could not receive data from client: An existing connection was forcibly closed by the remote host.
Python is 3.7
psycopg2's version is 2.8.5
Azure Postgres region is in West Europe
Does anyone have any suggestions on what I should try to make it work?
Thank you!
EDIT:
The issue resolved itself. I tried the same setup a few days later and it started working. Might have been something wrong with Azure West Europe.
I had this issue too. I think I read somewhere (I forget where) that Azure has an issue with the @ you have to use for the username (user@serverName).
I created variables and an f-string and then it worked OK.
import sqlalchemy

username = 'user@server_name'
password = 'PassWord!'
host = 'server_name.postgres.database.azure.com'
database = 'your_database'
conn_str = f'postgresql+psycopg2://{username}:{password}@{host}/{database}'
After that:
engine = sqlalchemy.create_engine(conn_str, pool_pre_ping=True)
conn = engine.connect()
Test it with a simple SQL statement.
sql = 'SELECT * FROM public.some_table;'
results = conn.execute(sql)
This was a connection in UK South. Before that, it did complain about the format of the username having to use @, although the username was correct, as tested from the command line with psql and another SQL client.
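If the raw @ inside the username ever confuses the URL parser, a common workaround (not part of the original answer) is to percent-encode the credentials before building the URL:

from urllib.parse import quote_plus
import sqlalchemy

username = quote_plus('user@server_name')   # the '@' becomes '%40'
password = quote_plus('PassWord!')          # also escapes special characters like '!'
host = 'server_name.postgres.database.azure.com'
database = 'your_database'

conn_str = f'postgresql+psycopg2://{username}:{password}@{host}/{database}'
engine = sqlalchemy.create_engine(conn_str, pool_pre_ping=True)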

connecting to oracle db from python using a config file

I am pretty much new to this concept.
Currently, I am connecting to an Oracle database directly using credentials inside each Python script, but I want to store the connection string in a text file or a separate .py file and load it from there, since I use the same credentials for multiple scripts.
I am using the cx_Oracle and SQLAlchemy packages to connect to two databases, extracting data from the source and pushing it to the target (both the source and the target are Oracle databases).
import cx_Oracle
from sqlalchemy import create_engine, types

dsn_tns = cx_Oracle.makedsn('shost', 'sport', service_name='sservice_name')
conn = cx_Oracle.connect(user='suser', password='spw', dsn=dsn_tns)
engine = create_engine('oracle://tuser:tpw@thost:tport/tservice_name')
I'd really like to automate this process, as I reuse the same connection string in multiple scripts, and I'd really appreciate some help.
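One common pattern is to keep the credentials in a small ini file and read them with configparser, so every script builds its connections from the same place. A sketch, assuming a hypothetical db_config.ini with [source] and [target] sections (all names are placeholders):

# db_config.ini (hypothetical):
#   [source]
#   host = shost
#   port = sport
#   service_name = sservice_name
#   user = suser
#   password = spw
#   [target]
#   ... same keys for the target database ...

import configparser
import cx_Oracle
from sqlalchemy import create_engine

config = configparser.ConfigParser()
config.read('db_config.ini')

src = config['source']
dsn_tns = cx_Oracle.makedsn(src['host'], src['port'], service_name=src['service_name'])
conn = cx_Oracle.connect(user=src['user'], password=src['password'], dsn=dsn_tns)

tgt = config['target']
engine = create_engine(
    f"oracle://{tgt['user']}:{tgt['password']}@{tgt['host']}:{tgt['port']}/{tgt['service_name']}"
)

The ini file can then be excluded from version control, so the credentials are not committed alongside the scripts.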
