How to specify TLS version 'TLSv1.2' when connecting to MySQL using Python peewee

How do I connect to a MySQL database using Python's peewee library while specifying the TLS versions ['TLSv1.1', 'TLSv1.2'], as shown here and here in the MySQL documentation?
I can successfully connect to the MySQL database using mysql.connector as shown here:
import mysql.connector
config = {'database': 'dbname',
          'user': 'username_so',
          'password': 'psswrd',
          'host': 'maria####-##-###-##.###.####.com',
          'port': 3306,
          'tls_versions': ['TLSv1.1', 'TLSv1.2']}
cnx = mysql.connector.connect(**config)
cnx.close()
However, I am unable to pass the 'tls_versions' parameter to peewee when establishing a connection. As a result, I get an error message:
ImproperlyConfigured: MySQL driver not installed!
I am fairly sure the problem is with specifying the TLS versions in peewee, because I was getting the same error message with mysql.connector before I added the 'tls_versions' parameter.
Here is the code I am using in peewee that is failing:
db = MySQLDatabase(**config)
db.get_tables()  # This call, and any other that connects to the db, raises the error above
My Setup:
Linux
Python 3.7
peewee==3.13.3
mysql-connector-python==8.0.21

As I responded to your GitHub issue:
You need to use playhouse.mysql_ext.MySQLConnectorDatabase to connect using the mysql-connector driver. That will resolve your issue.
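For reference, here is a minimal sketch of that change (using the placeholder credentials from the question; MySQLConnectorDatabase passes extra keyword arguments through to mysql.connector.connect(), so the tls_versions option should reach the driver):
from playhouse.mysql_ext import MySQLConnectorDatabase

db = MySQLConnectorDatabase(
    'dbname',
    user='username_so',
    password='psswrd',
    host='maria####-##-###-##.###.####.com',
    port=3306,
    tls_versions=['TLSv1.1', 'TLSv1.2'])  # forwarded to mysql.connector.connect()
db.get_tables()  # should now connect via the mysql-connector driver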

Related

MongoDB / pymongo 'authenticate' method call fails because no such method exists

I have a problem connecting to MongoDB using the pymongo driver (Python 3.10, pymongo 4.2.0, MongoDB 6); specifically, the authentication step fails. Please see my code below:
import pymongo
from pymongo import MongoClient
client = MongoClient('mongodb://10.10.1.8:27017')
client.admin.authenticate('dev01','my_password_here')
I am behind a company firewall; that's why you see an internal IP address, 10.10.1.8.
When I do a test run of the Python code to store data, I get the following error:
File "C:\Users\ABC\Documents\python_development\pymongo_test.py", line 7, in <module>
    client.admin.authenticate('dev01','my_password_here')
File "C:\Users\ABC\AppData\Local\Programs\Python\Python310\lib\site-packages\pymongo\collection.py", line 3194, in __call__
    raise TypeError(
TypeError: 'Collection' object is not callable. If you meant to call the 'authenticate' method on a 'Database' object it is failing because no such method exists.
MongoDB sits on a virtual Ubuntu server that runs on top of a Debian host. My laptop runs Windows 10.
The code looks correct, any ideas why this is happening?
Migration guide from pymongo 3.x to 4.x
https://pymongo.readthedocs.io/en/stable/migrate-to-pymongo4.html?highlight=authenticate#database-authenticate-and-database-logout-are-removed reads:
Removed pymongo.database.Database.authenticate() and pymongo.database.Database.logout(). Authenticating multiple users on the same client conflicts with support for logical sessions in MongoDB 3.6+. To authenticate as multiple users, create multiple instances of MongoClient. Code like this:
client = MongoClient()
client.admin.authenticate('user1', 'pass1')
client.admin.authenticate('user2', 'pass2')
can be changed to this:
client1 = MongoClient(username='user1', password='pass1')
client2 = MongoClient(username='user2', password='pass2')
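Applied to the code in the question, the fix would look something like this (a sketch using the question's placeholder credentials; authSource='admin' is an assumption based on the original client.admin.authenticate call):
from pymongo import MongoClient

# Pass credentials to the client instead of calling authenticate():
client = MongoClient('mongodb://10.10.1.8:27017',
                     username='dev01',
                     password='my_password_here',
                     authSource='admin')  # assumption: the user is defined in the admin db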

Load pandas DataFrame into Redshift

I am trying to load a pandas DataFrame into Redshift, but it keeps giving me an error. Please guide me on correcting the cluster configuration to make it work.
Below is my code and error traceback:
from sqlalchemy import create_engine
import pyodbc
import psycopg2
username = "#####"
host = "redshift-cluster-****.*****.ap-south-1.redshift.amazonaws.com"
driver = "Amazon Redshift (x64)"
port = 5439
pwd = "******"
db = "dev"
table = "tablename"
rs_engine = create_engine(f"postgresql://{username}:{pwd}@{host}:{port}/{db}")
df.to_sql(table, con=rs_engine, if_exists='replace',index=False)
Traceback:
OperationalError: (psycopg2.OperationalError) connection to server at "redshift-cluster-****.****.ap-south-1.redshift.amazonaws.com" (3.109.77.136), port 5439 failed: Connection timed out (0x0000274C/10060)
Is the server running on that host and accepting TCP/IP connections?
I even tried the options below, but I get the same error:
rs_engine = create_engine(f"redshift+psycopg2://{username}@{host}:{port}/{db}")
rs_engine = create_engine(f"postgresql+psycopg2://{username}:{pwd}@{host}:{port}/{db}")
import redshift_connector

rs_engine = redshift_connector.connect(
    host='redshift-cluster-####.****.ap-south-1.redshift.amazonaws.com',
    database='dev',
    user='****',
    password='#####'
)
Also, the 'Publicly accessible' setting is enabled on the Redshift cluster. I am still unable to connect and load the data.
UPDATE:
I also tried using the ODBC driver, but I get the same error:
import pyodbc

cnxn = pyodbc.connect(Driver=driver,
                      Server=host,
                      Database=db,
                      UID=username,
                      PWD=pwd,
                      Port=port)
When I tried to set it up using the ODBC Data Sources app, I got the same error.

AWS Lambda issue connecting to DocumentDB

Okay, I have written an AWS Lambda function which pulls data from an API and inserts the data into a DocumentDB database. When I connect to my cluster from the shell and run my Python script, it works just fine and inserts the data with no problem.
But when I implement the same logic in a Lambda function, it does not work. Below is an example of what works in the shell but not through a Lambda function.
import urllib3
import json
import certifi
import pymongo
from pymongo import MongoClient
# Make our connection to the DocumentDB cluster
# (Here I use the DocumentDB URI)
client = MongoClient('mongodb://admin_name_here:<insertYourPassword>@my_docdb_cluster/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&retryWrites=false')
# Specify the database to use
db = client.my_db
# Specify the collection to use
col = db.my_col
col.insert_one({"name": "abcdefg"})
The above works just fine in the shell but when run in Lambda I get the following error:
[ERROR] ServerSelectionTimeoutError: my_docdb_cluster timed out, Timeout: 30s, Topology Description: <TopologyDescription id: ***********, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription (my_docdb_cluster) server_type: Unknown, rtt: None, error=NetworkTimeout(my_docdb_cluster timed out')>]>
From my understanding, this error is telling me that the replica set has no primary. But that is not true; there definitely is a primary in my replica set. Does anyone know what the problem could be here?

Connecting to Azure PostgreSQL server from Python psycopg2 client

I am having trouble connecting to the Azure Postgres database from Python. I am following the guide here: https://learn.microsoft.com/cs-cz/azure/postgresql/connect-python
I have basically the same code for setting up the connection.
But both psycopg2 and SQLAlchemy throw the same error:
OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
I am able to connect to the instance with other client tools like DBeaver, but from Python it does not work.
When I investigate the Postgres logs, I can see that the server actually authorized the connection, but the next line says:
could not receive data from client: An existing connection was forcibly closed by the remote host.
Python is 3.7
psycopg's version is 2.8.5
Azure Postgres region is in West Europe
Does anyone have a suggestion on what I should try to make it work?
Thank you!
EDIT:
The issue resolved itself. I tried the same setup a few days later and it started working. It might have been something wrong with Azure West Europe.
I had this issue too. I think I read somewhere (I forget where) that Azure has an issue with the @ you have to use for the username (user@serverName).
I created variables and an f-string and then it worked OK.
import sqlalchemy

username = 'user@server_name'
password = 'PassWord!'
host = 'server_name.postgres.database.azure.com'
database = 'your_database'
conn_str = f'postgresql+psycopg2://{username}:{password}@{host}/{database}'
After that:
engine = sqlalchemy.create_engine(conn_str, pool_pre_ping=True)
conn = engine.connect()
Test it with a simple SQL statement.
sql = 'SELECT * FROM public.some_table;'
results = conn.engine.execute(sql)
This was a connection in UK South. Before that, it did complain about the format of the username having to use @, although the username was correct, as tested from the command line with psql and another SQL client.
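If the special characters in the credentials (the @ Azure requires in the username, or the ! in the password) break URL parsing, one more robust option, assuming SQLAlchemy 1.4+, is to build the URL with sqlalchemy.engine.URL.create, which handles the escaping. A sketch using the placeholder values from above:
import sqlalchemy
from sqlalchemy.engine import URL

# URL.create() escapes special characters such as '@' and '!' automatically.
url = URL.create(
    drivername='postgresql+psycopg2',
    username='user@server_name',
    password='PassWord!',
    host='server_name.postgres.database.azure.com',
    database='your_database')
engine = sqlalchemy.create_engine(url, pool_pre_ping=True)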

Why can't I connect to Oracle DB with SQLAlchemy?

I'm trying to connect to an Oracle DB with SQLAlchemy; however, I get the following error:
ORA-12545: Connect failed because target host or object does not exist
Note that the code runs in a Docker container on a VM in GCP.
I tried tools like telnet, curl, and nmap, and they are all able to connect or report the port as open, so I don't see why connecting through Python would suddenly make the host invisible.
Here is the code that is used to try to connect.
from sqlalchemy.orm.session import sessionmaker
from framework.db import BuildOracleConnection
Creds_Oracle = {
    'userName': 'urname',
    'password': 'pass',
    'host': '10.10.10.10',
    'port': '1521',
    'serviceName': 'svcName'
}
Conn_Oracle = BuildOracleConnection(Creds_Oracle)
metaConn = sessionmaker(bind=Conn_Oracle)
metaSession = metaConn()
sql = 'select * from table'
sql = sql.replace('\n', ' ')
sourceExtract = metaSession.execute(sql)
The part that throws the error is the last line.
I expect to be able to connect but instead I get the following error:
ORA-12545: Connect failed because target host or object does not exist.
For some reason I wasn't able to connect directly to the load balancer; instead, I had to connect to the nodes themselves.
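For comparison, a minimal sketch of that workaround with a plain SQLAlchemy/cx_Oracle URL, bypassing the internal BuildOracleConnection helper (node1.example.internal is a hypothetical node hostname; the other values are the question's placeholders):
from sqlalchemy import create_engine, text

# Point the connection at a node directly instead of the load balancer.
engine = create_engine(
    'oracle+cx_oracle://urname:pass@node1.example.internal:1521/?service_name=svcName')
with engine.connect() as conn:
    rows = conn.execute(text('select * from table'))  # the question's placeholder query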
