Connect error in PyMySQL code Windows 10 Python 3.5.0? - python-3.x

I'm trying to use PyMySQL 0.79 with python 3.5.0 under Windows 10 but get a connect error when running a simple program I called 'test.py'...
import pymysql
db = pymysql.connect( host = 'sqlserver.example.com', passwd 'SECRET12345', user = 'dbuser', db='myDatabase')
cursor = db.cursor()
sql = "INSERT INTO people (name, email, age) VALUES (%s, %s, %s)"
cursor.execute( sql, ( "Jay", "jay@example.com", '77') )
cursor.close()
db.commit() #Makes sure the DB saves your changes!
db.close()
But the above code gives me the error message:
C:\Users\Avtar\AppData\Local\Programs\Python\Python35-32\Lib\site-packages\pymysql>test.py
Traceback (most recent call last):
File "C:\Users\Avtar\AppData\Local\Programs\Python\Python35-32\lib\site-packages\pymysql\connections.py", line 890, in connect
(self.host, self.port), self.connect_timeout)
File "C:\Users\Avtar\AppData\Local\Programs\Python\Python35-32\lib\socket.py", line 689, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "C:\Users\Avtar\AppData\Local\Programs\Python\Python35-32\lib\socket.py", line 728, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Avtar\AppData\Local\Programs\Python\Python35-32\Lib\site-packages\pymysql\test.py", line 2, in <module>
db = pymysql.connect( host = 'sqlserver.example.com', passwd = 'SECRET12345', user = 'dbuser', db='myDatabase')
File "C:\Users\Avtar\AppData\Local\Programs\Python\Python35-32\lib\site-packages\pymysql\__init__.py", line 90, in Connect
return Connection(*args, **kwargs)
File "C:\Users\Avtar\AppData\Local\Programs\Python\Python35-32\lib\site-packages\pymysql\connections.py", line 688, in __init__
self.connect()
File "C:\Users\Avtar\AppData\Local\Programs\Python\Python35-32\lib\site-packages\pymysql\connections.py", line 937, in connect
raise exc
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'sqlserver.example.com' ([Errno 11001] getaddrinfo failed)")
I installed PyMySQL using the command 'pip install pymysql' and to check the installation I typed:
'pip list' which gives me:
pip (9.0.1)
PyMySQL (0.7.9)
setuptools (18.2)
yolk (0.4.3)
'pip show pymysql' which gives me:
Name: PyMySQL
Version: 0.7.9
Summary: Pure Python MySQL Driver
Home-page: https://github.com/PyMySQL/PyMySQL/
Author: INADA Naoki
Author-email: songofacandy@gmail.com
License: MIT
Location: c:\users\avtar\appdata\local\programs\python\python35-32\lib\site-packages
Requires:
I cannot tell from this what I am doing wrong, so would really appreciate if anyone can help me sort this. Thanks in advance.

First and foremost, a host named "sqlserver.example.com" is usually a Microsoft SQL Server, not MySQL. If that's the case (i.e. you are actually using SQL Server), then instead of the pymysql library you should use a more generic library, such as pyodbc.
Based on another Stack Overflow question, for Python 2.7 the way to use that library is as follows:
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=sqlserver.example.com;DATABASE=testdb;UID=me;PWD=pass')
cursor = cnxn.cursor()
However, if in fact you are using MySQL, then your configuration is incorrect...
Notice that your code has no "=" between passwd and your password, nor does it specify the port number:
db = pymysql.connect( host = 'sqlserver.example.com', passwd 'SECRET12345', user = 'dbuser', db='myDatabase')
As such I would suggest:
db = pymysql.connect( host = 'sqlserver.example.com', port=3306, passwd='SECRET12345', user = 'dbuser', db='myDatabase')
If your port is not 3306 and you are not sure which port your MySQL server is listening on, you can find it with:
mysql> select * from INFORMATION_SCHEMA.GLOBAL_VARIABLES WHERE VARIABLE_NAME LIKE 'PORT';
+---------------+----------------+
| VARIABLE_NAME | VARIABLE_VALUE |
+---------------+----------------+
| PORT | 2127 |
+---------------+----------------+
1 row in set (0.00 sec)
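Since the underlying failure here is [Errno 11001] getaddrinfo failed, it is also worth confirming that the hostname resolves at all before debugging PyMySQL itself. A minimal sketch (the helper name and hosts are just examples, not part of PyMySQL):

```python
import socket

def can_resolve(host, port=3306):
    """Return True if DNS can resolve the host; socket.gaierror is what
    surfaces as [Errno 11001] getaddrinfo failed on Windows."""
    try:
        socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM)
        return True
    except socket.gaierror:
        return False

print(can_resolve("localhost"))              # resolves on any normal machine
print(can_resolve("sqlserver.example.com"))  # False if the name does not exist in your DNS
```

If this prints False for your database host, the problem is the hostname (or your DNS/VPN setup), not the connect arguments.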

Related

Python API - InfluxDB - When I try to connect to the database and fetch the list of existing databases, I get influxdb.exceptions.InfluxDBClientError: 401:

>>> from influxdb import InfluxDBClient
>>> from datetime import datetime
>>> client = InfluxDBClient('localhost', 8086, 'root', '<Password>')
>>> client.get_list_database()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/<USername>/.local/lib/python3.6/site-packages/influxdb/client.py", line 704, in get_list_database
return list(self.query("SHOW DATABASES").get_points())
File "/home/<USername>/.local/lib/python3.6/site-packages/influxdb/client.py", line 527, in query
expected_response_code=expected_response_code
File "/home/<USername>/.local/lib/python3.6/site-packages/influxdb/client.py", line 378, in request
raise InfluxDBClientError(err_msg, response.status_code)
influxdb.exceptions.InfluxDBClientError: 401: {"code":"unauthorized","message":"Unauthorized"}
Can someone please tell me what I'm doing wrong here?
The issue seems clear from the line you used to connect to InfluxDB:
client = InfluxDBClient('localhost', 8086, 'root', '<Password>')
It looks like you copied a sample command to connect to your database (the placeholder '<Password>' gives it away). Because of this you get a 401 error, which means your credentials are not valid. Replace these placeholders with the real credentials for your InfluxDB instance.

impala.error.HiveServer2Error: Failed after retrying 3 times

I use impyla and ibis to connect to the Hive server, but I get the following error.
I tried the following code:
from impala.dbapi import connect
impcur = connect(host="kudu3", port=10000, database="yingda_test", password=None, user='admin', kerberos_service_name='None').cursor()
The new error came out:
Traceback (most recent call last):
File "/Users/edy/src/PythonProjects/dt-center-algorithm/test/1.py", line 4, in <module>
impcur = connect(host="kudu3", port=10000, database="yingda_test", password=None, user='admin', kerberos_service_name='None').cursor()
File "/usr/local/conda3/envs/py37/lib/python3.7/site-packages/impala/hiveserver2.py", line 129, in cursor
session = self.service.open_session(user, configuration)
File "/usr/local/conda3/envs/py37/lib/python3.7/site-packages/impala/hiveserver2.py", line 1187, in open_session
resp = self._rpc('OpenSession', req, True)
File "/usr/local/conda3/envs/py37/lib/python3.7/site-packages/impala/hiveserver2.py", line 1080, in _rpc
response = self._execute(func_name, request, retry_on_http_error)
File "/usr/local/conda3/envs/py37/lib/python3.7/site-packages/impala/hiveserver2.py", line 1142, in _execute
.format(self.retries))
impala.error.HiveServer2Error: Failed after retrying 3 times
thrift 0.15.0
thrift-sasl 0.4.3
thriftpy2 0.4.14
pure-sasl 0.6.2
sasl 0.2.1
ibis-framework 2.0.0
impyla 0.17.0
python version: 3.7.12 with anaconda
And I have tried both the ibis-framework 1.3.0 and 2.0.0 versions. Can you give me some advice? Thanks a lot.
I also ran into this problem.
My code is:
from impala.dbapi import connect
import psycopg2
conn_hive = connect(host="xxx.xxx.xxx.xxx", port=xxx, user='admin',
                    password='password', database='xxx', auth_mechanism="PLAIN", timeout=6000)
hive_cursor = conn_hive.cursor()
hive_cursor.execute(query_sql)
data_list = hive_cursor.fetchall()
...get data...
hive_cursor.close()
conn_hive.close()
After my colleagues and I experimented, we found that it succeeds when we reconnect to Hive manually.
That means if you want to fetch different data from the same Hive database, you should close the connection and reconnect manually, like this:
conn_hive = connect(host="xxx.xxx.xxx.xxx", port=xxx, user='admin',
                    password='password', database='xxx', auth_mechanism="PLAIN", timeout=6000)
hive_cursor = conn_hive.cursor()
hive_cursor.execute(query_sql)
data_list = hive_cursor.fetchall()
...get data1...
hive_cursor.close()
conn_hive.close()
conn_hive = connect(host="xxx.xxx.xxx.xxx", port=xxx, user='admin',
                    password='password', database='xxx', auth_mechanism="PLAIN", timeout=6000)
hive_cursor = conn_hive.cursor()
hive_cursor.execute(query_sql)
data_list = hive_cursor.fetchall()
...get data2...
hive_cursor.close()
conn_hive.close()
Finally, my colleagues told me that there had been a problem with Impala recently.
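The reconnect-per-query workaround above can be wrapped in a small context manager so the close/reconnect boilerplate is not repeated. This is only a sketch, not part of the impyla API: connect_fn stands for whatever zero-argument callable opens your connection (e.g. a lambda wrapping impala.dbapi.connect with your host, port, and auth_mechanism):

```python
from contextlib import contextmanager

@contextmanager
def fresh_cursor(connect_fn):
    """Open a brand-new connection, yield a cursor from it, and always
    close both afterwards, so every query starts from a clean connection."""
    conn = connect_fn()
    cursor = conn.cursor()
    try:
        yield cursor
    finally:
        cursor.close()
        conn.close()

# Hypothetical usage with impyla:
# with fresh_cursor(lambda: connect(host="xxx.xxx.xxx.xxx", port=xxx,
#                                   user='admin', password='password',
#                                   auth_mechanism="PLAIN")) as cur:
#     cur.execute(query_sql)
#     data_list = cur.fetchall()
```

Each `with` block then gets its own connection, which matches the "reconnect manually per query" behavior the answer describes.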

Airflow instance does not connect to edge-node server through SFTPsensor (SSH connection type)

My goal is to make an Airflow DAG check if a file exists in a directory inside a different server (in this case, an edge-node from a cluster).
My first approach was to make an SSHOperator that triggered a bash script (on the edge-node server) which checks whether the directory is empty. This worked: I was able to receive the output from the bash script in the DAG logs telling me whether the dir is empty or not. However, when the SSHOperator fails (i.e., the script did not find a file in the dir) the current dag run is interrupted and a new dag run starts. If this happens multiple times (which is expected) I will end up with a ton of interrupted dag runs in the tree view =/
So, my second approach is to use a proper sensor. In this case, the SFTPSensor seems to be the best option.
So here is my python DAG code:
from airflow import DAG
from datetime import timedelta, datetime
from airflow.utils.dates import days_ago
from airflow.models import Variable
import requests
import logging
import time
from airflow.contrib.sensors.sftp_sensor import SFTPSensor
from airflow.operators.python_operator import PythonOperator
def say_bye(**context):
    print("byebyeeee!")

default_args = {
    'owner': 'airflow',
    "start_date": days_ago(1),
}

ssh_id = Variable.get("ssh_connection_id_imb")
source_path = "/trf/cq/millennium/rcp/"

dag = DAG(dag_id='ing_cgd_millennium_t_ukajrnl_imb_test4', default_args=default_args, schedule_interval=None)

with dag:
    s0 = SFTPSensor(
        task_id='sensing_task',
        path=source_path,
        fs_conn_id=ssh_id,
        poke_interval=60,
        mode='reschedule',
        retries=1
    )
    t1 = PythonOperator(task_id='run_this_goodbye', python_callable=say_bye, provide_context=True)
    s0 >> t1
My SSH connection (ssh_connection_id_imb) looks like this: https://i.stack.imgur.com/x7iLu.png
And the error:
[2021-03-09 11:56:07,662] {base_hook.py:89} INFO - Using connection to: id: sftp_default. Host: localhost, Port: 22, Schema: None, Login: airflow, Password: None, extra: XXXXXXXX
[2021-03-09 11:56:07,664] {base_hook.py:89} INFO - Using connection to: id: sftp_default. Host: localhost, Port: 22, Schema: None, Login: airflow, Password: None, extra: XXXXXXXX
[2021-03-09 11:56:07,665] {sftp_sensor.py:46} INFO - Poking for lpc600.group.com:/trf/cq/millenium/rcp/C.PGMLNGL.FKM001.041212.20201123.gz
[2021-03-09 11:56:07,665] {logging_mixin.py:112} WARNING - /opt/miniconda/lib/python3.7/site-packages/pysftp/__init__.py:61: UserWarning: Failed to load HostKeys from /root/.ssh/known_hosts. You will need to explicitly load HostKeys (cnopts.hostkeys.load(filename)) or disableHostKey checking (cnopts.hostkeys = None).
warnings.warn(wmsg, UserWarning)
[2021-03-09 11:56:07,666] {taskinstance.py:1150} ERROR - Unable to connect to localhost: [Errno 101] Network is unreachable
Traceback (most recent call last):
File "/opt/miniconda/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 984, in _run_raw_task
result = task_copy.execute(context=context)
File "/opt/miniconda/lib/python3.7/site-packages/airflow/sensors/base_sensor_operator.py", line 107, in execute
while not self.poke(context):
File "/opt/miniconda/lib/python3.7/site-packages/airflow/contrib/sensors/sftp_sensor.py", line 48, in poke
self.hook.get_mod_time(self.path)
File "/opt/miniconda/lib/python3.7/site-packages/airflow/contrib/hooks/sftp_hook.py", line 219, in get_mod_time
conn = self.get_conn()
File "/opt/miniconda/lib/python3.7/site-packages/airflow/contrib/hooks/sftp_hook.py", line 114, in get_conn
self.conn = pysftp.Connection(**conn_params)
File "/opt/miniconda/lib/python3.7/site-packages/pysftp/__init__.py", line 140, in __init__
self._start_transport(host, port)
File "/opt/miniconda/lib/python3.7/site-packages/pysftp/__init__.py", line 176, in _start_transport
self._transport = paramiko.Transport((host, port))
File "/opt/miniconda/lib/python3.7/site-packages/paramiko/transport.py", line 416, in __init__
"Unable to connect to {}: {}".format(hostname, reason)
paramiko.ssh_exception.SSHException: Unable to connect to localhost: [Errno 101] Network is unreachable
I noticed that the base_hook is pointing to localhost while the sftp_sensor is pointing to the correct server... do I need to set up the base hook? Am I missing a step? Thanks for the help! =)
Just realized my errors...
Problem #1: bad SFTP connection parameter name:
s0 = SFTPSensor(
    task_id='sensing_task',
    path=source_path,
    sftp_conn_id=ssh_id,  # instead of fs_conn_id
    poke_interval=60,
    mode='reschedule',
    retries=1
)
Problem #2: the Extra field needs to be defined in the connection.
I created a public key and added it to the Extra field:
{"key_file": "/airflow/generated_sshkey_dir/id_rsa.pub", "no_host_key_check": true}
So, this makes my connection prone to a man-in-the-middle attack, since I'm not checking the host key. In my case, this solution is sufficient.

pymysql stopped working : NameError: name 'byte2int' is not defined

I am using pymysql in Python to connect to a database. It was working fine, but now I am getting the following error:
Traceback (most recent call last):
File "/Users/njethani/Desktop/venv/lib/python3.6/site-packages/pymysql/__init__.py", line 94, in Connect
return Connection(*args, **kwargs)
File "/Users/njethani/Desktop/venv/lib/python3.6/site-packages/pymysql/connections.py", line 327, in __init__
self.connect()
File "/Users/njethani/Desktop/venv/lib/python3.6/site-packages/pymysql/connections.py", line 598, in connect
self._request_authentication()
File "/Users/njethani/Desktop/venv/lib/python3.6/site-packages/pymysql/connections.py", line 865, in _request_authentication
data = _auth.scramble_old_password(self.password, self.salt) + b'\0'
File "/Users/njethani/Desktop/venv/lib/python3.6/site-packages/pymysql/_auth.py", line 72, in scramble_old_password
hash_pass = _hash_password_323(password)
File "/Users/njethani/Desktop/venv/lib/python3.6/site-packages/pymysql/_auth.py", line 97, in _hash_password_323
for c in [byte2int(x) for x in password if x not in (' ', '\t', 32, 9)]:
File "/Users/njethani/Desktop/venv/lib/python3.6/site-packages/pymysql/_auth.py", line 97, in <listcomp>
for c in [byte2int(x) for x in password if x not in (' ', '\t', 32, 9)]:
NameError: name 'byte2int' is not defined
I am using the following line to connect to my database (connection string):
conn = pymysql.Connect(host='hostname', port=3306, user='username', passwd='password', db='mysql')
Since the pymysql maintainer refuses to release the fix, the solution is simply to install an older version of the package:
pip3 install --user 'pymysql<0.9'
I got the same error; it looked like _auth.py was unable to find the reference to byte2int. I modified _auth.py by adding the line below to make it work:
from .util import byte2int, int2byte
(The relative import matters, since _auth.py lives inside the pymysql package.) Please be aware that the util.py meant here is PyMySQL's own, as there are util.py files in other packages as well.
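If you cannot patch the installed package, it also helps to know what byte2int actually did: in the affected PyMySQL versions it was roughly the helper below (a paraphrase of the old pymysql/util.py, not its exact source). It converts a 1-byte value to its integer code, covering both bytes-indexing (already an int on Python 3) and a length-1 bytes object:

```python
def byte2int(b):
    """Return the integer value of a single byte.
    Indexing a bytes object in Python 3 already yields an int,
    so pass that through; otherwise fall back to ord()."""
    if isinstance(b, int):
        return b
    return ord(b)

print(byte2int(b"A"))     # ord() path for a length-1 bytes object -> 65
print(byte2int(b"A"[0]))  # passthrough: b"A"[0] is already the int 65
```

Pinning `pymysql<0.9` as shown above is still the cleaner fix; this is only for understanding (or shimming) the missing name.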

Connect to Mongo Server from Python Flask Application

I'm a beginner at Python. I'm trying to connect to a MongoDB server from my Python Flask application, but I'm not able to run the application because of the problem below.
from flask import Flask, render_template
from flask_pymongo import PyMongo
from pymongo import MongoClient # Database connector
app = Flask(__name__)
app.config["MONGO_DBNAME"] = "connect_to_pymon"
app.config["MONGO_URI"] = "mongodb://mongo_test:mongo_test#123@ds129821.mlab.com:29821/connect_to_pymon"
mongo = PyMongo(app)

@app.route("/add")
def add():
    user = mongo.db.users
    user.insert[{"name":"kishor"}]
    return "users added!"

if __name__ == '__main__':
    app.run(debug=True, port=8080)
This is my source code. When I execute it I get the following error, which I can't trace:
Traceback (most recent call last):
File "C:/Users/kravi/PycharmProjects/GitUploadTest/mongoTest.py", line 9, in <module>
mongo = PyMongo(app)
File "C:\Kishor\Training\My_Workspace\Python_Basics\TaskCRUD\venv\lib\site-packages\flask_pymongo\__init__.py", line 116, in __init__
self.init_app(app, uri, *args, **kwargs)
File "C:\Kishor\Training\My_Workspace\Python_Basics\TaskCRUD\venv\lib\site-packages\flask_pymongo\__init__.py", line 149, in init_app
parsed_uri = uri_parser.parse_uri(uri)
File "C:\Kishor\Training\My_Workspace\Python_Basics\TaskCRUD\venv\lib\site-packages\pymongo\uri_parser.py", line 379, in parse_uri
user, passwd = parse_userinfo(userinfo)
File "C:\Kishor\Training\My_Workspace\Python_Basics\TaskCRUD\venv\lib\site-packages\pymongo\uri_parser.py", line 97, in parse_userinfo
"RFC 3986, use %s()." % quote_fn)
pymongo.errors.InvalidURI: Username and password must be escaped according to RFC 3986, use urllib.parse.quote_plus().
Process finished with exit code 1
Note: I haven't installed MongoDB on my PC. I'm using MongoDB as a service.
You can connect to your db using PyMongo as follows:
from pymongo import MongoClient
client = MongoClient("mongodb://mongo_test:mongo_test#123@ds129821.mlab.com:29821/connect_to_pymon")
db = client["dbname"] # connect_to_pymon in your case
I realized that using special characters like "#" in the password causes trouble, because the Mongo URI format itself reserves such symbols (the URI already uses "@" and "#" as delimiters). I changed the database password and tried again, after which it worked!
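Alternatively, to keep the "#" in the password instead of changing it, you can do the escaping the exception message itself asks for, using urllib.parse.quote_plus. A sketch using the credentials from the question (host, database, and password are the question's examples):

```python
from urllib.parse import quote_plus

user = "mongo_test"
password = "mongo_test#123"  # '#' is reserved in URIs and must be escaped

# quote_plus turns '#' into %23, so the URI parser no longer
# misreads where the password ends
uri = "mongodb://%s:%s@ds129821.mlab.com:29821/connect_to_pymon" % (
    quote_plus(user), quote_plus(password))
print(uri)
```

The resulting URI contains `mongo_test%23123` in the userinfo part and passes pymongo's RFC 3986 check, so it can be assigned to `app.config["MONGO_URI"]` directly.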
