Connecting a pymongo client to a MongoDB server with TLS - python-3.x

I have two instances on Google Cloud:
Instance A and Instance B - both have a static external IP address.
Instance A runs the community edition of MongoDB server v4.4.6.
I have generated self-signed certificates to enable TLS.
I have set the firewall rules in my cloud network to allow traffic to the MongoDB port from Instance B's IP address.
As such, I am able to successfully use the mongo shell (v4.4.6) on Instance B to connect to the MongoDB server running on Instance A. This is the command I use:
mongo --tls --tlsCertificateKeyFile client.pem --tlsCAFile ca.pem <instance_a_ip>:<port>/admin -u <userName> -p
I would like to use the pymongo (v3.11.4) client from Instance B to connect to my MongoDB server on Instance A, and I have tried that with the following in an interactive Python shell:
client = MongoClient("mongodb://<instance_a_ip>:<port>/admin", tls=True, tlsCertificateKeyFile='./client.pem', tlsCAFile='./ca.pem', username='<userName>', password='<userPassword>')
However, I am not able to connect, and this is the error that I receive:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/varun/test-env/lib/python3.8/site-packages/pymongo/collection.py", line 1319, in find_one
for result in cursor.limit(-1):
File "/home/varun/test-env/lib/python3.8/site-packages/pymongo/cursor.py", line 1207, in next
if len(self.__data) or self._refresh():
File "/home/varun/test-env/lib/python3.8/site-packages/pymongo/cursor.py", line 1100, in _refresh
self.__session = self.__collection.database.client._ensure_session()
File "/home/varun/test-env/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1816, in _ensure_session
return self.__start_session(True, causal_consistency=False)
File "/home/varun/test-env/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1766, in __start_session
server_session = self._get_server_session()
File "/home/varun/test-env/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1802, in _get_server_session
return self._topology.get_server_session()
File "/home/varun/test-env/lib/python3.8/site-packages/pymongo/topology.py", line 496, in get_server_session
self._select_servers_loop(
File "/home/varun/test-env/lib/python3.8/site-packages/pymongo/topology.py", line 215, in _select_servers_loop
raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: <instance_a_ip>:<port>: ("Invalid DNS pattern b'127.0.0.1'.",), Timeout: 30s, Topology Description: <TopologyDescription id: 60ad03827b267af40c2edf4b, topology_type: Single, servers: [<ServerDescription ('<instance_a_ip>', <port>) server_type: Unknown, rtt: None, error=AutoReconnect('<instance_a_ip>:<port>: ("Invalid DNS pattern b\'127.0.0.1\'.",)')>]>
I am new to MongoDB and cannot figure out how to go about this; help would be greatly appreciated.

I debugged the issue by installing the Node.js client for MongoDB, which provided a much better message upon failure:
[Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's altnames: IP: 34.126.133.72 is not in the cert's list
Thanks to the meaningful error, I read through the OpenSSL configuration file that I had used while creating the self-signed certificates and rectified the mistake I had made in it, as follows.
Original config file which caused the error
[ v3_req ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = 127.0.0.1
DNS.2 = <instance_a_ip>
Rectified config file which now works with all MongoDB clients
[ v3_req ]
subjectAltName = @alt_names
[ alt_names ]
IP.1 = 127.0.0.1
IP.2 = <instance_a_ip>
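For anyone hitting the same mismatch: you can confirm whether an existing certificate carries its addresses as DNS or IP entries before pointing a client at it. A quick sketch using the openssl CLI (the throwaway file names and the 203.0.113.10 address are just examples):

```shell
# Issue a throwaway self-signed certificate whose SANs are IP entries
# (requires OpenSSL 1.1.1+ for -addext; filenames are examples)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/san-test.key -out /tmp/san-test.pem \
  -subj "/CN=test" \
  -addext "subjectAltName = IP:127.0.0.1, IP:203.0.113.10"

# Inspect the SAN section: IP addresses must show up as "IP Address:",
# not "DNS:", or verification against a bare IP will fail
openssl x509 -in /tmp/san-test.pem -noout -text | grep -A1 "Subject Alternative Name"
```

If the output lists your server's address under "DNS:" rather than "IP Address:", the certificate has the same problem described above.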

Your certificate is self-signed, so add this option when creating the MongoClient:
tlsInsecure=True
Note that tlsInsecure allows invalid certificates and hostnames entirely, so it is only appropriate for development. The code will look like this:
client = MongoClient(
    ["<instance_a_ip>:<port>"],
    tls=True,
    tlsInsecure=True,
    tlsCertificateKeyFile='./client.pem',
    tlsCAFile='./ca.pem',
    username='<userName>',
    password='<userPassword>'
)
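If some verification should be kept, a narrower relaxation than tlsInsecure is worth knowing about (an assumption on my part, not from the answer above: the tlsAllowInvalidHostnames option, available since PyMongo 3.9, keeps the CA check and skips only the hostname match). A sketch of the keyword arguments, with the question's placeholders kept:

```python
# Hypothetical kwargs for MongoClient: keep CA verification, skip only
# the hostname check (a weaker relaxation than tlsInsecure=True).
client_kwargs = {
    "tls": True,
    "tlsAllowInvalidHostnames": True,
    "tlsCertificateKeyFile": "./client.pem",
    "tlsCAFile": "./ca.pem",
    "username": "<userName>",
    "password": "<userPassword>",
}
# client = MongoClient("mongodb://<instance_a_ip>:<port>/admin", **client_kwargs)
print("tlsInsecure" in client_kwargs)  # prints False: full insecure mode not needed
```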

How to connect to a database via asyncssh (tunnel) and psycopg2?

python 3.11 / asyncssh 2.13 / psycopg2-binary 2.9.5
After using the sshtunnel library, I wanted to switch to the asyncssh library because it is better maintained for newer Python versions and brings async benefits.
After reading the asyncssh docs, I wrote a little test script and could connect via SSH from my local computer to my server. I executed an ls statement on the server and could print the outcome with my Python script. After that small success, I wanted to connect to my PostgreSQL database via the asyncssh tunnel, so I can push my local pandas data to my server's database.
In the past this worked well with the sshtunnel library. Unfortunately, I am failing to establish a connection to the database on my server with the asyncssh library.
Problem
My main problem is wrapping psycopg2 into the tunnel, as with remote_bind_address in sshtunnel. I tried forward_local_port and forward_remote_port, and it seems the connection is established, but how do I funnel psycopg2 into it? Instead of connecting to my server's database port 5432, should I connect to the tunnel port (see example 2)?
Example: How sshtunnel worked.
(this example is from geo.rocks, but I used the same structure).
import psycopg2
from sshtunnel import SSHTunnelForwarder

try:
    with SSHTunnelForwarder(
            ('some.linktodb.com', 22),  # port 22 as standard SSH port
            ssh_username="username",
            ssh_pkey="your/private/key",  # your private key file
            ssh_private_key_password="****",
            remote_bind_address=('localhost', 5432)) as server:  # mirroring to local port 5432
        server.start()
        params = {  # database params
            'database': 'test',
            'user': 'dome',
            'password': '*****',
            'host': 'localhost',
            'port': server.local_bind_port
        }
        conn = psycopg2.connect(**params)
        curs = conn.cursor()  # if this works, you are connected
        print("DB connected")
except:
    print("Connection failed")
Example: My current asyncssh approach.
import os
import asyncssh
import psycopg2

async def run_client() -> None:
    async with asyncssh.connect(
            host=os.environ.get('SSH_HOST'),
            known_hosts=None,
            username=os.environ.get('SSH_USER'),
            passphrase=os.environ.get('SSH_PW'),
            client_keys=os.environ.get('SSH_PRIVAT_KEY')) as tunnel:
        listener = await tunnel.forward_local_port(
            listen_host='',
            listen_port=8084,
            dest_host='127.0.0.1',
            dest_port=5432)
        conn = psycopg2.connect(
            host=os.environ.get('DB_HOST'),
            port=os.environ.get('DB_PORT'),  # <- `local_bind_port` like in sshtunnel?
            database=os.environ.get('DB_NAME'),
            user=os.environ.get('DB_USER'),
            password=os.environ.get('DB_PW'),
        )
OUTPUT:
...
[ 2023-01-23,12:17:17.+0100 ] [INFO] logging.py - log - [conn=0] Creating local TCP forwarder from port 8084 to 127.0.0.1, port 5432
[ 2023-01-23,12:17:17.+0100 ] [INFO] logging.py - log - [conn=0] Closing connection
[ 2023-01-23,12:17:17.+0100 ] [INFO] logging.py - log - [conn=0] Sending disconnect: Disconnected by application (11)
[ 2023-01-23,12:17:17.+0100 ] [INFO] logging.py - log - [conn=0] Connection closed
Traceback (most recent call last):
File "/home/user/atlas/user_atlas/mono/tutorials/python/db_ssh_activity.py", line 135, in <module>
loop.run_until_complete(coroutine)
File "/usr/lib64/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/user/atlas/user_atlas/mono/tutorials/python/db_ssh_activity.py", line 125, in main
await run_client()
File "/home/user/atlas/user_atlas/mono/tutorials/python/db_ssh_activity.py.py", line 82, in run_client
conn = psycopg2.connect(
^^^^^^^^^^^^^^^^^
File "/home/user/atlas/mono/mono_env/lib64/python3.11/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
psycopg2.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
EXAMPLE 2:
Instead of connecting to my server's database port 5432, I connect to the tunnel port:
import os
import asyncssh
import psycopg2

async def run_client() -> None:
    async with asyncssh.connect(
            host=os.environ.get('SSH_HOST'),
            known_hosts=None,
            username=os.environ.get('SSH_USER'),
            passphrase=os.environ.get('SSH_PW'),
            client_keys=os.environ.get('SSH_PRIVAT_KEY')) as tunnel:
        listener = await tunnel.forward_local_port(
            listen_host='localhost',
            listen_port=1,
            dest_host='127.0.0.1',
            dest_port=5432)
        conn = psycopg2.connect(
            host='localhost',
            port=listener.get_port(),
            database=os.environ.get('DB_NAME'),
            user=os.environ.get('DB_USER'),
            password=os.environ.get('DB_PW'),
        )
        listener.wait_closed()
OUTPUT:
[ 2023-01-23,12:40:46.+0100 ] [INFO] logging.py - log - [conn=0] Auth for user hendrix succeeded
[ 2023-01-23,12:40:46.+0100 ] [INFO] logging.py - log - [conn=0] Creating local TCP forwarder from localhost, port 1 to 127.0.0.1, port 5432
[ 2023-01-23,12:40:46.+0100 ] [DEBUG] logging.py - log - [conn=0] Failed to create local TCP listener: [Errno 13] error while attempting to bind on address ('::1', 1, 0, 0): Permission denied
[ 2023-01-23,12:40:46.+0100 ] [INFO] logging.py - log - [conn=0] Closing connection
[ 2023-01-23,12:40:46.+0100 ] [INFO] logging.py - log - [conn=0] Sending disconnect: Disconnected by application (11)
[ 2023-01-23,12:40:46.+0100 ] [INFO] logging.py - log - [conn=0] Connection closed
Traceback (most recent call last):
File "/home/user/atlas/mono/monokapi_jupyter/tutorials/python/db_ssh_activity.py", line 136, in <module>
loop.run_until_complete(coroutine)
File "/usr/lib64/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/user/atlas/user_atlas/mono/tutorials/python/db_ssh_activity.py", line 126, in main
await run_client()
File "/home/user/atlas/user_atlas/mono/tutorials/python/db_ssh_activity.py.py", line 76, in run_client
listener = await tunnel.forward_local_port(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/atlas/user_atlas/mono_env/lib64/python3.11/site-packages/asyncssh/connection.py", line 2944, in forward_local_port
listener = await create_tcp_forward_listener(self, self._loop,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/atlas/user_atlas/mono_env/lib64/python3.11/site-packages/asyncssh/listener.py", line 341, in create_tcp_forward_listener
return await create_tcp_local_listener(conn, loop, protocol_factory,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/atlas/user_atlas/mono_env/lib64/python3.11/site-packages/asyncssh/listener.py", line 314, in create_tcp_local_listener
raise OSError(exc.errno, 'error while attempting ' # type: ignore
PermissionError: [Errno 13] error while attempting to bind on address ('::1', 1, 0, 0): Permission denied
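The Permission denied in the second traceback is not an asyncssh problem: listen_port=1 asks for a privileged port (below 1024), which an unprivileged user cannot bind on Linux. A stdlib-only sketch of the usual workaround, binding port 0 so the OS picks a free ephemeral port (the asyncssh equivalent would be passing listen_port=0 to forward_local_port(), reading the chosen port back with listener.get_port(), and handing that port to psycopg2.connect(host='localhost', port=...)):

```python
import socket

# Bind port 0 and let the kernel choose a free, unprivileged port --
# the same idea as listen_port=0 with forward_local_port().
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
port = sock.getsockname()[1]
print(port)  # an unprivileged port chosen by the kernel
sock.close()
```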

Error 500 when trying to send a file from a remote server (Python code) to an SFTP server

I have a Flask API in a Docker container where I do the following:
from flask import Flask, request
import os
import json
import paramiko
import subprocess

app = Flask(__name__)

@app.route("/")
def hello():
    return "Service up in K8S!"

@app.route("/get", methods=['GET'])
def get_ano():
    print("Test liveness")
    return "Pod is alive !"

@app.route("/run", methods=['POST'])
def run_dump_generation():
    rules_str = request.headers.get('database')
    print(rules_str)
    postgres_bin = r"/usr/bin/"
    dump_file = "database_dump.sql"
    os.environ['PGPASSWORD'] = 'XXXXX'
    print('Before dump generation')
    with open(dump_file, "w") as f:
        result = subprocess.call([
            os.path.join(postgres_bin, "pg_dump"),
            "-Fp",
            "-d",
            "XX",
            "-U",
            "XX",
            "-h",
            "XX",
            "-p",
            "XX"
        ],
            stdout=f
        )
    print('After dump generation')
    transport = paramiko.Transport(("X", X))
    transport.connect(username="X", password="X")
    sftp = transport.open_sftp_client()
    remote_file = '/data/database_dump.sql'
    sftp.put('database_dump.sql', remote_file)
    print("SFTP object", sftp)

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)
When I run the app in Kubernetes with a POST request, I get the error: "POST /run HTTP/1.1" 500
Here is the requirements.txt:
Flask==2.0.1
paramiko==3.0.0
The error comes from transport = paramiko.Transport(("X", X)). The same code works locally, and I don't understand why I get this error on Kubernetes. No prints appear in the logs - I assume because of the 500. I guess it is not possible with this code to send a file from this container to the SFTP server (which runs OpenSSH).
What can I do?
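One way to see what is actually failing behind the bare 500: Flask hides the traceback unless it reaches the pod logs. A minimal, stdlib-only sketch of a hypothetical safe_run() helper that logs the full traceback of each step before re-raising (the step names and the paramiko call in the comment are illustrative, not from the question):

```python
import logging
import traceback

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dump-api")

def safe_run(step_name, fn, *args, **kwargs):
    """Run one step of the handler; on failure, log the full traceback
    (so it shows up in `kubectl logs`) and re-raise."""
    try:
        return fn(*args, **kwargs)
    except Exception:
        log.error("step %r failed:\n%s", step_name, traceback.format_exc())
        raise

# Inside run_dump_generation() this would wrap the failing call, e.g.:
#   transport = safe_run("sftp-connect", paramiko.Transport, ("X", 22))
print(safe_run("demo", lambda: 2 + 2))  # prints 4
```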
---- UPDATE ----
I think I have found the problem. From the Flask pod, I am trying to send a file to the SFTP server, so I have to modify the following code to "allow" this type of transfer. It is an SFTP server with OpenSSH.
Here is the code to modify:
transport = paramiko.Transport(("X", X))
transport.connect(username="X", password="X")
sftp = transport.open_sftp_client()
remote_file = '/data/database_dump.sql'
sftp.put('database_dump.sql', remote_file)
print("SFTP object", sftp)
The SFTP server (with OpenSSH) runs on Alpine, and the Flask code is in an Alpine container too.
UPDATE BELOW - I tried the following:
ssh=paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
session = ssh.connect(hostname="X", port=X, username='X', password="X")
print(ssh)
But I get the following error:
File "c:/Users/X/dump_generator_api/t.py", line 32, in <module>
session = ssh.connect(hostname="X", port=X, username='X', password="X")
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\client.py", line 449, in connect
self._auth(
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\client.py", line 780, in _auth
raise saved_exception
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\client.py", line 767, in _auth
self._transport.auth_password(username, password)
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\transport.py", line 1567, in auth_password
return self.auth_handler.wait_for_response(my_event)
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\auth_handler.py", line 259, in wait_for_response
raise e
paramiko.ssh_exception.AuthenticationException: Authentication failed.

Why does my script run in local PyCharm, but I get errors when I upload it to a VPS (host)?

Why does my script run in local PyCharm, but when I upload it to a VPS (host) I get errors?
Here is my Python code:
import json
import socketio

TOKEN = "my-super-token"  # your DonationAlerts token
sio = socketio.Client()

@sio.on('connect')
def on_connect():
    sio.emit('add-user', {"token": TOKEN, "type": "alert_widget"})

@sio.on('donation')
def on_message(data):
    y = json.loads(data)
    print(y['username'])
    print(y['message'])
    print(y['amount'])
    print(y['currency'])

sio.connect('wss://socket.donationalerts.ru:443', transports='websocket')
From localhost (PyCharm) everything works, but from the host I get this error:
Traceback (most recent call last):
  File "test.py", line 22, in <module>
    sio.connect('wss://socket.donationalerts.ru:443', transports='websocket')
  File "/usr/local/lib/python3.8/site-packages/socketio/client.py", line 338, in connect
    raise exceptions.ConnectionError(exc.args[0]) from None
socketio.exceptions.ConnectionError: Connection error
I have tried installing and reinstalling Python, packages, etc.
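Before blaming the library, it is worth checking that the VPS can reach the endpoint at all; many hosts filter outbound traffic. A stdlib-only reachability sketch (the host and port come from the question; the can_reach name is made up):

```python
import socket

def can_reach(host, port, timeout=5):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On the VPS this should print True for the socket server the script needs;
# False points at DNS, routing, or firewall issues rather than socketio.
print(can_reach("socket.donationalerts.ru", 443))
```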

Python 3.9 - Connect to memsql

I am trying to connect to memsql (running as a Docker container - cluster-in-a-box). I am using Python 3.9; I tried with Python 3.8 as well.
Here is the code snippet:
from memsql.common import database
conn = database.connect(host="127.0.0.1", port=3306, user="root")
print(conn.query("show databases"))
When I run this, I get the following error:
Traceback (most recent call last):
File "/Users/ngarg/PycharmProjects/memsqlKafka/startup_try.py", line 3, in <module>
conn = database.connect(host="127.0.0.1", port=3306, user="root")
File "/Users/ngarg/Library/Python/3.9/lib/python/site-packages/memsql/common/database.py", line 19, in connect
return Connection(*args, **kwargs)
File "/Users/ngarg/Library/Python/3.9/lib/python/site-packages/memsql/common/database.py", line 62, in __init__
self.reconnect()
File "/Users/ngarg/Library/Python/3.9/lib/python/site-packages/memsql/common/database.py", line 93, in reconnect
conn = _mysql.connect(**self._db_args)
TypeError: 'db' is an invalid keyword argument for connect()
I tried to google this but didn't find anything, and I am blocked on this step. Any help is appreciated.
When you connect from the latest version of Django to SingleStore DB, you might receive the following error message:
django.db.utils.OperationalError: (2012, 'Error in server handshake')
To connect to SingleStore DB, you will need to configure the auth_plugin in the OPTIONS field.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'HOST': '{HOST}',
        'NAME': '{DBNAME}',
        'USER': '{USERNAME}',
        'PASSWORD': '{PASSWORD}',
        'PORT': '3306',
        'OPTIONS': {
            'auth_plugin': 'mysql_native_password'
        }
    }
}
https://support.singlestore.com/hc/en-us/articles/360057857552-Connecting-Django-to-SingleStore-DB

Docker-py client gives "invalid client port specification"

I am using the docker-py client to create containers on an as-needed basis. I use a generator to come up with a port number and try to run the httpd image on that port of the host. However, the client errors out with ("invalid port specification: "<port number>"") for any port number that I try.
Below is sample code that I am trying:
import docker
client = docker.from_env()
container= client.containers.run(image="httpd", ports={'80/tcp': 43545}, detach=True)
To note: the number 43545 does not have any significance here.
Docker details:
Client - 19.03.6
API Version - 1.40
Engine: 19.03.6
Error:
  File "/project/api/.venv/lib/python3.7/site-packages/docker/models/containers.py", line 803, in run
    detach=detach, **kwargs)
  File "/project/api/.venv/lib/python3.7/site-packages/docker/models/containers.py", line 861, in create
    resp = self.client.api.create_container(**create_kwargs)
  File "/project/api/.venv/lib/python3.7/site-packages/docker/api/container.py", line 429, in create_container
    return self.create_container_from_config(config, name)
  File "/project/api/.venv/lib/python3.7/site-packages/docker/api/container.py", line 440, in create_container_from_config
    return self._result(res, True)
  File "/project/api/.venv/lib/python3.7/site-packages/docker/api/client.py", line 267, in _result
    self._raise_for_status(response)
  File "/project/api/.venv/lib/python3.7/site-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/project/api/.venv/lib/python3.7/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 400 Client Error: Bad Request ("invalid port specification: "43545"")
