MongoDB / pymongo 'authenticate' method call fails because no such method exists - python-3.x

I have a problem connecting to MongoDB using the pymongo driver (Python 3.10, pymongo 4.2.0, MongoDB 6); specifically, the authentication step fails. Please see my code below:
import pymongo
from pymongo import MongoClient
client=MongoClient('mongodb://10.10.1.8:27017')
client.admin.authenticate('dev01','my_password_here')
I am behind a company firewall, which is why you see the internal IP address 10.10.1.8.
When I run the Python code to store data, I get the following error:
File "C:\Users\ABC\Documents\python_development\pymongo_test.py", line 7, in <module>
client.admin.authenticate('dev01','my_password_here')
File "C:\Users\ABC\AppData\Local\Programs\Python\Python310\lib\site-packages\pymongo\collection.py", line 3194, in __call__
    raise TypeError(
TypeError: 'Collection' object is not callable. If you meant to call the 'authenticate' method
on a 'Database' object it is failing because no such method exists.
MongoDB sits on a virtual Ubuntu Linux server that runs on top of a Debian Linux host. My laptop runs Windows 10.
The code looks correct; any ideas why this is happening?

Migration guide from pymongo 3.x to 4.x
https://pymongo.readthedocs.io/en/stable/migrate-to-pymongo4.html?highlight=authenticate#database-authenticate-and-database-logout-are-removed reads:
Removed pymongo.database.Database.authenticate() and pymongo.database.Database.logout(). Authenticating multiple users on the same client conflicts with support for logical sessions in MongoDB 3.6+. To authenticate as multiple users, create multiple instances of MongoClient. Code like this:
client = MongoClient()
client.admin.authenticate('user1', 'pass1')
client.admin.authenticate('user2', 'pass2')
can be changed to this:
client1 = MongoClient(username='user1', password='pass1')
client2 = MongoClient(username='user2', password='pass2')
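Applied to the question's setup, the fix is to drop the authenticate() call entirely and hand the credentials to MongoClient, either as keyword arguments or embedded in the URI. A minimal sketch with the question's placeholder credentials (host and password are stand-ins), using only the standard library to build the URI:

```python
from urllib.parse import quote_plus

user = "dev01"
password = "my_password_here"  # placeholder from the question
host = "10.10.1.8:27017"

# Percent-encode the credentials in case they contain characters
# such as '@' or ':' that would break URI parsing.
uri = "mongodb://%s:%s@%s" % (quote_plus(user), quote_plus(password), host)
print(uri)

# With pymongo 4.x, either form replaces the removed authenticate() call:
#   client = MongoClient(uri)
#   client = MongoClient("mongodb://10.10.1.8:27017",
#                        username=user, password=password)
```

Depending on which database the user was created in, an authSource query parameter (or the authSource keyword argument) may also be needed.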

Related

SQLAlchemy / pymysql caching_sha2_password not configured

I'm a bit of a Python newbie, and I have encountered an error message while attempting to connect to a MySQL database using SQLAlchemy. I will note that this code works on another Linux server with the same installation (as far as I can tell), as well as on a Windows server.
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2059, "Authentication plugin 'b'caching_sha2_password'' not configured")
A search for that error message on StackOverflow returns no results, and searching on Google... also returned no results.
This is my connection code that is throwing the above error:
connection_string = f"mysql+pymysql://{username}:{password}@{hostname}:{port}/{db}"
connection_args = {'ssl': {'fake_flag_to_enable_tls': True}}
self._conn = create_engine(connection_string, connect_args=connection_args)
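Two things are worth checking here, independent of the server's configuration. The 2059 error usually means the installed PyMySQL predates support for caching_sha2_password, so upgrading pymysql and installing its optional cryptography dependency (which this auth method requires) is the commonly reported fix. Separately, credentials with special characters should be percent-encoded before being placed in the URL; a small sketch with hypothetical placeholder values:

```python
from urllib.parse import quote_plus

# Hypothetical placeholder credentials; substitute your own.
username, password = "app_user", "p@ss:word"
hostname, port, db = "db.example.com", 3306, "mydb"

# Percent-encode the credentials so characters like '@' or ':'
# cannot break URL parsing.
connection_string = (
    f"mysql+pymysql://{quote_plus(username)}:{quote_plus(password)}"
    f"@{hostname}:{port}/{db}"
)
print(connection_string)
```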

AWS Lambda issue connecting to DocumentDB

Okay, I have written an AWS Lambda function which pulls data from an API and inserts the data into a DocumentDB database. When I connect to my cluster from the shell and run my Python script, it works just fine and inserts the data with no problem.
But when I implement the same logic in a Lambda function, it does not work. Below is an example of what works in the shell but not through a Lambda function.
import urllib3
import json
import certifi
import pymongo
from pymongo import MongoClient
# Make our connection to the DocumentDB cluster
# (Here I use the DocumentDB URI)
client = MongoClient('mongodb://admin_name_here:<insertYourPassword>@my_docdb_cluster/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&retryWrites=false')
# Specify the database to use
db = client.my_db
# Specify the collection to use
col = db.my_col
col.insert_one({"name": "abcdefg"})
The above works just fine in the shell, but when run in Lambda I get the following error:
[ERROR] ServerSelectionTimeoutError: my_docdb_cluster timed out, Timeout: 30s, Topology Description: <TopologyDescription id: ***********, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription (my_docdb_cluster) server_type: Unknown, rtt: None, error=NetworkTimeout('my_docdb_cluster timed out')>]>
From my understanding, this error is telling me that the replica set has no primary. But that is not true; there definitely is a primary in my replica set. Does anyone know what could be the problem here?

How to specify tls version 'TLSv1.2' when connecting to MySQL using Python peewee

How do I connect to a MySQL database using Python's peewee library while specifying the tls-versions ['TLSv1.1', 'TLSv1.2'] as shown here and here in the MySQL documentation?
I can successfully connect to the MySQL database using mysql.connector as shown here:
import mysql.connector
config = {'database': 'dbname',
          'user': 'username_so',
          'password': 'psswrd',
          'host': 'maria####-##-###-##.###.####.com',
          'port': 3306,
          'tls_versions': ['TLSv1.1', 'TLSv1.2']}
cnx = mysql.connector.connect(**config)
cnx.close()
However, I am unable to pass the 'tls_versions' parameter to peewee when establishing a connection. As a result, I get an error message:
ImproperlyConfigured: MySQL driver not installed!
I am pretty sure that the problem is with specifying the TLS versions in peewee, because I was getting the same error message with mysql.connector before I added the additional 'tls_versions' parameter.
Here is the code I am using in peewee that is failing:
db = MySQLDatabase(**config)
db.get_tables() # This function and any other that connects to the db gets the same error message specified above
My Setup:
Linux
Python 3.7
peewee==3.13.3
mysql-connector-python==8.0.21
As I responded to your GitHub issue:
You need to use playhouse.mysql_ext.MySQLConnectorDatabase to connect using the mysql-connector driver. That will resolve your issue.
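A minimal sketch of that suggestion, using the question's config with a hypothetical host. MySQLConnectorDatabase forwards extra keyword arguments (including tls_versions) to mysql.connector.connect(), and the import is guarded here since it requires peewee and mysql-connector-python to be installed:

```python
# Connection settings from the question, with a hypothetical host.
config = {'user': 'username_so',
          'password': 'psswrd',
          'host': 'maria.example.com',
          'port': 3306,
          'tls_versions': ['TLSv1.1', 'TLSv1.2']}

try:
    # Requires: pip install peewee mysql-connector-python
    from playhouse.mysql_ext import MySQLConnectorDatabase
    # First positional argument is the database name; the remaining
    # keyword arguments are forwarded to mysql.connector.connect().
    db = MySQLConnectorDatabase('dbname', **config)
except ImportError:
    db = None  # drivers not installed in this environment
```

Constructing the Database object does not open a connection; the TLS settings take effect on the first query or an explicit db.connect().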

Connecting to MongoDB in Kubernetes pod with kubernetes-client using Python

I have a MongoDB instance running on Kubernetes and I'm trying to connect to it using Python with the Kubernetes library.
I'm connecting to the context on cmd line using:
kubectl config use-context CONTEXTNAME
With Python, I'm using:
from kubernetes import client, config
config.load_kube_config(context='CONTEXTNAME')
To connect to MongoDB in cmd line:
kubectl port-forward svc/mongo-mongodb 27083:27017 -n production &
I then open a new terminal and use PORT_FORWARD_PID=$! to connect
I'm trying to connect to the MongoDB instance using Python with the kubernetes-client library; any ideas as to how to accomplish the above?
Define a Kubernetes Service for the MongoDB deployment, and then reference your MongoDB using a connection string similar to mongodb://<service-name>.default.svc.cluster.local
My understanding is that you need to find out your DB client endpoint.
That can be done by following the article MongoDB on K8s; make sure you get the URI for MongoDB, for example:
"mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname_?"
After that, you can call your DB client in a Python script.
import pymongo
import sys
##Create a MongoDB client
client = pymongo.MongoClient('mongodb://......')
##Specify the database to be used
db = client.test
##Specify the collection to be used
col = db.myTestCollection
##Insert a single document
col.insert_one({'hello':'world'})
##Find the document that was previously written
x = col.find_one({'hello':'world'})
##Print the result to the screen
print(x)
##Close the connection
client.close()
Hope that will give you an idea.
Good luck!

Flask SQLalchemy can't connect to Google Cloud Postgresql database with Unix socket

I am using Flask-SQLAlchemy in my Google App Engine standard environment project to try and connect to my GCP PostgreSQL database.
According to the Google docs, the URL can be created in this format:
# postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_socket=/cloudsql/<cloud_sql_instance_name>
and below is my code
from flask import Flask, request, jsonify
from flask_sqlalchemy import SQLAlchemy
import constants
app = Flask(__name__)
# Database configuration from GCP postgres+pg8000
# (user, password, dbname, instance_name are defined elsewhere)
DB_URL = 'postgres+pg8000://{user}:{pw}@/{db}?unix_socket=/cloudsql/{instance_name}'.format(user=user, pw=password, db=dbname, instance_name=instance_name)
app.config['SQLALCHEMY_DATABASE_URI'] = DB_URL
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False  # silence the deprecation warning
sqldb = SQLAlchemy(app)
This is the error I keep getting:
File "/env/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 412, in connect
    return self.dbapi.connect(*cargs, **cparams)
TypeError: connect() got an unexpected keyword argument 'unix_socket'
The argument to specify a unix socket varies depending on what driver you use. According to the pg8000 docs, you need to use unix_sock instead of unix_socket.
To see this in the context of an application, you can take a look at this sample application.
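Applied to the DB_URL from the question, only the query-string key changes. A sketch with hypothetical placeholder values, building just the string (no database required):

```python
# Hypothetical placeholder values standing in for the question's variables.
user, password, dbname = "db_user", "db_pass", "db_name"
instance_name = "project:region:instance"

# pg8000 expects the keyword 'unix_sock', not 'unix_socket'.
DB_URL = (
    "postgres+pg8000://{user}:{pw}@/{db}"
    "?unix_sock=/cloudsql/{instance_name}"
).format(user=user, pw=password, db=dbname, instance_name=instance_name)
print(DB_URL)
```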
It's been more than 1.5 years and no one has posted the solution yet :)
Anyway, just use the below URI
postgres+psycopg2://<db_user>:<db_pass>@<public_ip>/<db_name>?host=/cloudsql/<cloud_sql_instance_name>
And yes, don't forget to add your system's public IP address to the authorized networks.
Example from the docs
As you can read in the gcloud guides, an exemplary connection string is
postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_sock=<socket_path>/<cloud_sql_instance_name>/.s.PGSQL.5432
Varying engine and socket part
Be aware that the engine part postgres+pg8000 varies depending on your database and used driver. Also, depending on your database client library, the socket part ?unix_sock=<socket_path>/<cloud_sql_instance_name>/.s.PGSQL.5432 may be needed or can be omitted, as per:
Note: The PostgreSQL standard requires a .s.PGSQL.5432 suffix in the socket path. Some libraries apply this suffix automatically, but others require you to specify the socket path as follows: /cloudsql/INSTANCE_CONNECTION_NAME/.s.PGSQL.5432.
PostgreSQL and flask_sqlalchemy
For instance, I am using PostgreSQL with flask_sqlalchemy as the database client and pg8000 as the driver, and my working connection string is only postgres+pg8000://<db_user>:<db_pass>@/<db_name>.
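The two socket variants described above can be sketched side by side. These are hypothetical placeholder values, and the strings simply follow the patterns quoted from the docs: pg8000 takes the full socket path including the .s.PGSQL.5432 suffix via unix_sock, while psycopg2 takes the socket directory via host and appends the suffix itself.

```python
# Hypothetical placeholder values; substitute your own instance details.
user, pw, db = "db_user", "db_pass", "db_name"
instance = "project:region:instance"

# pg8000: the full socket path, including the .s.PGSQL.5432 suffix,
# goes in the 'unix_sock' query parameter.
pg8000_url = (
    f"postgres+pg8000://{user}:{pw}@/{db}"
    f"?unix_sock=/cloudsql/{instance}/.s.PGSQL.5432"
)

# psycopg2: the socket *directory* goes in 'host'; the library
# appends the .s.PGSQL.5432 suffix automatically.
psycopg2_url = f"postgres+psycopg2://{user}:{pw}@/{db}?host=/cloudsql/{instance}"

print(pg8000_url)
print(psycopg2_url)
```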
