I have a MongoDB instance running on Kubernetes and I'm trying to connect to it using Python with the Kubernetes library.
I'm selecting the context on the command line using:
kubectl config use-context CONTEXTNAME
With Python, I'm using:
from kubernetes import client, config
config.load_kube_config(context='CONTEXTNAME')
To connect to MongoDB from the command line:
kubectl port-forward svc/mongo-mongodb 27083:27017 -n production &
I then save the background process ID with PORT_FORWARD_PID=$! and connect from a new terminal.
I'm trying to connect to the MongoDB instance from Python with the kubernetes-client library. Any ideas on how to accomplish the above?
Define a Kubernetes Service in front of your MongoDB pods, and then reference MongoDB using a connection string similar to mongodb://<service-name>.default.svc.cluster.local
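For example, here is a minimal sketch of connecting with pymongo from a pod inside the cluster, assuming the service name mongo-mongodb and the namespace production from the question (adjust both to your setup):
import pymongo

# In-cluster DNS name: <service-name>.<namespace>.svc.cluster.local
client = pymongo.MongoClient('mongodb://mongo-mongodb.production.svc.cluster.local:27017/')
print(client.server_info()['version'])  # quick connectivity check
client.close()
Note that this DNS name only resolves from inside the cluster; from your workstation you would still need the port-forward from the question.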
My understanding is that you need to find out your DB client endpoint.
You can get that by following this article: MongoDB on K8s.
Make sure you have the URI for MongoDB, for example:
mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname
After that, you can call your DB client from a Python script.
import pymongo

# Create a MongoDB client
client = pymongo.MongoClient('mongodb://......')
# Specify the database to be used
db = client.test
# Specify the collection to be used
col = db.myTestCollection
# Insert a single document
col.insert_one({'hello': 'world'})
# Find the document that was previously written
x = col.find_one({'hello': 'world'})
# Print the result to the screen
print(x)
# Close the connection
client.close()
Hope that will give you an idea.
Good luck!
Related
I have my RDS (PostgreSQL) database in a private subnet.
I want to query this DB using a Python program. Is this possible?
I have a bastion running SSM, and I can easily connect to the bastion without any keys and then connect to the DB.
Is there a way of doing the port forwarding in a Python program?
Thanks!
Actually, it's very simple if you use this article: https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/
Just run:
aws ssm start-session --target $INSTANCE_ID
This will create a session to the EC2 bastion. After that, you can run any Python program that uses psycopg2:
import psycopg2
connection = psycopg2.connect(user="joe",
                              password="joe",
                              host="######",
                              port="5432",
                              database="stackdb")
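If you want the port forwarding itself to happen inside the Python program, a minimal sketch is to launch the AWS CLI as a subprocess. This assumes the AWS CLI and the Session Manager plugin are installed and uses the AWS-StartPortForwardingSessionToRemoteHost document to tunnel a local port through the bastion to the RDS endpoint; the instance ID and RDS hostname below are placeholders:
import subprocess
import time
import psycopg2

# Forward localhost:5432 through the SSM-managed bastion to RDS.
session = subprocess.Popen([
    'aws', 'ssm', 'start-session',
    '--target', 'i-0123456789abcdef0',  # bastion instance ID (placeholder)
    '--document-name', 'AWS-StartPortForwardingSessionToRemoteHost',
    '--parameters',
    '{"host":["mydb.xxxx.rds.amazonaws.com"],"portNumber":["5432"],"localPortNumber":["5432"]}',
])
time.sleep(5)  # crude wait for the tunnel to come up

connection = psycopg2.connect(user="joe", password="joe",
                              host="127.0.0.1", port="5432",
                              database="stackdb")
# ... run your queries ...
connection.close()
session.terminate()  # tear down the tunnel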
Just putting this here as it might help someone.
I have trouble connecting to an Azure PostgreSQL database from Python. I am following the guide here - https://learn.microsoft.com/cs-cz/azure/postgresql/connect-python
I have basically the same code for setting up the connection,
but psycopg2 and SQLAlchemy both throw me the same error:
OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
I am able to connect to the instance with other client tools like DBeaver, but from Python it does not work.
When I investigate the Postgres logs, I can see that the server actually authorized the connection, but the next line says:
could not receive data from client: An existing connection was forcibly closed by the remote host.
Python is 3.7, psycopg2's version is 2.8.5, and the Azure Postgres region is West Europe.
Does anyone have any suggestions on what I should try to make it work?
Thank you!
EDIT:
The issue resolved itself. I tried the same setup a few days later and it started working. It might have been something wrong with Azure West Europe.
I had this issue too. I think I read somewhere (I forget where) that Azure has an issue with the @ you have to use for the username (user@serverName).
I created variables and an f-string and then it worked OK.
import sqlalchemy

username = 'user@server_name'
password = 'PassWord!'
host = 'server_name.postgres.database.azure.com'
database = 'your_database'
conn_str = f'postgresql+psycopg2://{username}:{password}@{host}/{database}'
After that:
engine = sqlalchemy.create_engine(conn_str, pool_pre_ping=True)
conn = engine.connect()
Test it with a simple SQL statement.
sql = 'SELECT * FROM public.some_table;'
results = conn.engine.execute(sql)
This was a connection in UK South. Before that, it did complain about the format of the username having to use @, although the username was correct, as tested from the command line with psql and another SQL client.
We are connecting to Oracle from Python using the cx_Oracle package,
but the user ID, password, and SID details are hardcoded in it.
My question is: is there any way to create a data-source kind of thing? And how do we deploy such Python scripts in production?
The database is on a Linux box, and Python is installed on another Linux box (a WebLogic server is also installed on that box).
import cx_Oracle
con = cx_Oracle.connect('pythonhol/welcome@127.0.0.1/orcl')
print(con.version)
My expectation is:
Can we deploy Python on a production instance?
If yes, how can we connect to the database while hiding the DB credentials?
Use some kind of 'external authentication', for example a wallet. See the cx_Oracle documentation https://cx-oracle.readthedocs.io/en/latest/user_guide/connection_handling.html#connecting-using-external-authentication
In summary:
create a wallet with mkstore which contains the username/password credentials.
copy the wallet to the machines that are running Python
make sure no bad people can access the wallet
configure Oracle Net files to point to the wallet
Your scripts would then connect like this:
connection = cx_Oracle.connect(dsn="mynetalias", encoding="UTF-8")
or
pool = cx_Oracle.SessionPool(externalauth=True, homogeneous=False,
                             dsn="mynetalias", encoding="UTF-8")
pool.acquire()
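As a usage illustration, here is a minimal sketch of querying through the externally authenticated pool; it assumes "mynetalias" resolves via your tnsnames.ora and the wallet supplies the credentials, so no password appears in the script:
import cx_Oracle

pool = cx_Oracle.SessionPool(externalauth=True, homogeneous=False,
                             dsn="mynetalias", encoding="UTF-8")
connection = pool.acquire()
cursor = connection.cursor()
cursor.execute("SELECT sysdate FROM dual")  # trivial smoke-test query
print(cursor.fetchone())
pool.release(connection)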
I am using Flask-SQLAlchemy in my Google App Engine standard environment project to try to connect to my GCP PostgreSQL database.
According to the Google docs, the URL can be created in this format:
# postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_socket=/cloudsql/<cloud_sql_instance_name>
and below is my code
from flask import Flask, request, jsonify
from flask_sqlalchemy import SQLAlchemy
import constants

app = Flask(__name__)

# Database configuration from GCP postgres+pg8000
DB_URL = 'postgres+pg8000://{user}:{pw}@/{db}?unix_socket=/cloudsql/{instance_name}'.format(
    user=user, pw=password, db=dbname, instance_name=instance_name)

app.config['SQLALCHEMY_DATABASE_URI'] = DB_URL
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False  # silence the deprecation warning
sqldb = SQLAlchemy(app)
This is the error I keep getting:
File "/env/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 412, in connect
    return self.dbapi.connect(*cargs, **cparams)
TypeError: connect() got an unexpected keyword argument 'unix_socket'
The argument to specify a unix socket varies depending on what driver you use. According to the pg8000 docs, you need to use unix_sock instead of unix_socket.
To see this in the context of an application, you can take a look at this sample application.
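For instance, a minimal sketch of the corrected URL, keeping the placeholder names from the question (pg8000 also expects the full socket path, including the .s.PGSQL.5432 suffix mentioned below):
# unix_sock (not unix_socket) is the keyword pg8000 understands
DB_URL = ('postgres+pg8000://{user}:{pw}@/{db}'
          '?unix_sock=/cloudsql/{instance_name}/.s.PGSQL.5432').format(
    user=user, pw=password, db=dbname, instance_name=instance_name)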
It's been more than 1.5 years and no one has posted the solution yet :)
Anyway, just use the URI below:
postgres+psycopg2://<db_user>:<db_pass>@<public_ip>/<db_name>?host=/cloudsql/<cloud_sql_instance_name>
And yes, don't forget to add your system's public IP address to the authorized networks.
See the example in the docs.
As you can read in the gcloud guides, an example connection string is
postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_sock=<socket_path>/<cloud_sql_instance_name>/.s.PGSQL.5432
Varying engine and socket part
Be aware that the engine part postgres+pg8000 varies depending on your database and the driver used. Also, depending on your database client library, the socket part ?unix_sock=<socket_path>/<cloud_sql_instance_name>/.s.PGSQL.5432 may be needed or can be omitted, as per:
Note: The PostgreSQL standard requires a .s.PGSQL.5432 suffix in the socket path. Some libraries apply this suffix automatically, but others require you to specify the socket path as follows: /cloudsql/INSTANCE_CONNECTION_NAME/.s.PGSQL.5432.
PostgreSQL and flask_sqlalchemy
For instance, I am using PostgreSQL with flask_sqlalchemy as the database client and pg8000 as the driver, and my working connection string is only postgres+pg8000://<db_user>:<db_pass>@/<db_name>.
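As an illustration, a minimal sketch of wiring that short connection string into flask_sqlalchemy; the user, password, and database names are placeholders:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Short-form pg8000 connection string from above, with illustrative placeholders
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres+pg8000://my_user:my_pass@/my_db'
db = SQLAlchemy(app)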
I am using HAProxy for AWS RDS (MySQL) load balancing for my app, which is written in Flask.
The HAProxy.cfg file has the following configuration for the DB:
listen mysql
    bind 127.0.0.1:3306
    mode tcp
    balance roundrobin
    option mysql-check user haproxy_check
    option log-health-checks
    server db01 MASTER_DATABASE_ENDPOINT.rds.amazonaws.com
    server db02 READ_REPLICA_ENDPOINT.rds.amazonaws.com
I am using SQLAlchemy, and its URI is:
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://USER:PASSWORD@127.0.0.1:3306/DATABASE'
But when I run the APIs in my test environment, the ones that just read from the DB execute just fine, while the ones that write to the DB mostly give me this error:
(pymysql.err.InternalError) (1290, 'The MySQL server is running with the --read-only option so it cannot execute this statement')
I think I need to use two URLs in this scenario: one for read-only operations and one for writes.
How does this work with Flask and SQLAlchemy behind HAProxy?
How do I tell my app to use one URL for write operations and the other HAProxy URL for read-only operations?
I didn't find any help in the SQLAlchemy documentation.
Binds
Flask-SQLAlchemy can easily connect to multiple databases. To achieve
that it preconfigures SQLAlchemy to support multiple “binds”.
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://USER:PASSWORD@DEFAULT:3306/DATABASE'
SQLALCHEMY_BINDS = {
    'master': 'mysql+pymysql://USER:PASSWORD@MASTER_DATABASE_ENDPOINT:3306/DATABASE',
    'read': 'mysql+pymysql://USER:PASSWORD@READ_REPLICA_ENDPOINT:3306/DATABASE'
}
Referring to Binds:
db.create_all(bind='read') # from read only
db.create_all(bind='master') # from master
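Building on that, here is a minimal sketch of routing reads and writes explicitly; it assumes the Flask-SQLAlchemy 2.x get_engine() API and the bind keys configured above:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://USER:PASSWORD@DEFAULT:3306/DATABASE'
app.config['SQLALCHEMY_BINDS'] = {
    'master': 'mysql+pymysql://USER:PASSWORD@MASTER_DATABASE_ENDPOINT:3306/DATABASE',
    'read': 'mysql+pymysql://USER:PASSWORD@READ_REPLICA_ENDPOINT:3306/DATABASE'
}
db = SQLAlchemy(app)

# One engine per bind: send writes to the master, reads to the replica.
read_engine = db.get_engine(app, bind='read')
write_engine = db.get_engine(app, bind='master')

with read_engine.connect() as conn:
    rows = conn.execute('SELECT * FROM some_table').fetchall()

with write_engine.connect() as conn:
    conn.execute("INSERT INTO some_table (name) VALUES ('example')")
With this split, write statements never reach the read replica, which avoids the --read-only error from the question.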