VSCodium Python Debugging

I am currently learning how to hook up MariaDB using Python, developing on Parrot OS.
I set up a virtual environment and ran pip install mariadb.
I'm using the following simple file (setupTest.py):
# Module Imports
import mariadb
import sys

# Connect to MariaDB Platform
try:
    conn = mariadb.connect(
        user="user",
        password="password",
        host="localhost",
        port=3306,
        database="test_db"
    )
except mariadb.Error as e:
    print(f"Error connecting to MariaDB Platform: {e}")
    sys.exit(1)

# Get Cursor
cur = conn.cursor()
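For completeness, pulling test data through the cursor then looks like this (a sketch; the table name test_table is hypothetical):

cur.execute("SELECT * FROM test_table")  # 'test_table' is a hypothetical name
for row in cur:
    print(row)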
When I run python setupTest.py, the file executes with no issues; I can even pull some test data from my test_db database.
My issue is that when I try 'Run and Debug' in VSCodium, I get an error (the screenshot shows the custom command VSCodium runs for debug vs. the one I used).
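Since the script runs fine from the terminal, one thing worth checking is which interpreter the debug session actually launches; a minimal sketch, printed from inside the debugged script:

import sys
print(sys.executable)  # the interpreter actually running the script
print(sys.prefix)      # should point into the virtual environment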
Any help will be appreciated.
Thanks!

Related

Connecting to postgresql from python 3, running in Cloud Shell: password authentication failed

I'm trying to run locally (from the GCP terminal) the Python 3 tutorial program to connect to my PostgreSQL database.
I run the proxy, as suggested in the source:
./cloud_sql_proxy -instances=xxxxxxxx:us-central1:testpg=tcp:5432
It works; I can connect to it with:
psql "host=127.0.0.1 sslmode=disable dbname=guestbook user=postgres"
Unfortunately, when I try to connect from Python:
cnx = psycopg2.connect(dbname=db_name, user=db_user,
                       password=db_password, host=host)
with host set to 127.0.0.1, as I run it locally, I get this error:
psycopg2.OperationalError: connection to server at "127.0.0.1", port 5432 failed: FATAL: password authentication failed for user "postgres"
I can't figure out what I'm missing.
Thanks in advance.
I'd recommend using the Cloud SQL Python Connector to manage your connections and best of all you won't need to worry about running the proxy manually. It supports the pg8000 postgresql driver and can run from Cloud Shell.
Here is an example code snippet showing how to use it:
from google.cloud.sql.connector import connector
import sqlalchemy

# configure Cloud SQL Python Connector properties
def getconn():
    conn = connector.connect(
        "xxxxxxxx:us-central1:testpg",
        "pg8000",
        user="YOUR_USER",
        password="YOUR_PASSWORD",
        db="YOUR_DB"
    )
    return conn

# create connection pool to re-use connections
pool = sqlalchemy.create_engine(
    "postgresql+pg8000://",
    creator=getconn,
)

# query or insert into Cloud SQL database
with pool.connect() as db_conn:
    # query database
    result = db_conn.execute("SELECT * from my_table").fetchall()
    # do something with the results
    for row in result:
        print(row)
For more detailed examples refer to the README of the repository.

How to pass dynamic inputs for a command on windows remote machine using python pypsexec.client library

I am executing commands on a remote Windows machine using the Python pypsexec.client library, from a local Windows machine where the Python script lives.
Below is the command I am trying to execute from the Python script (the callback variable).
How do I pass the name and details parameters from the script?
Below is my script:
from pypsexec.client import Client

ip = '10.X.X.X'
try:
    conn = Client(ip, 'administrator', 'password', encrypt=False)
    conn.connect()
    conn.create_service()
    print('service created for following "{}".......\n\n'.format(ip))
    callback = r"""C:\Progra~1\Nimsoft\bin\pu -u administrator -p password
    /CHOSERVER1_domain/CHOSERVER1_hub/CHOSERVER1/hub getrobots"""
    stdout, stderr, rc = conn.run_executable('cmd.exe',
                                             arguments='/c {}'.format(callback),
                                             stdin=None)
    stdout = str(stdout, 'utf-8')
    stderr = str(stderr, 'utf-8')
    print(stdout)
except Exception as e:
    print('Below exception occurred .....\n')
    print(e)
    print()
While running the above script, the terminal waits for the name and details parameters mentioned in the screenshot.
Any help would be appreciated. Thank you.
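One approach, sketched under the assumption that pu reads the missing name and details values from standard input: pypsexec's run_executable accepts a stdin byte string that is sent to the spawned process once it starts, so the prompts can be answered up front (the answer values below are hypothetical):

answers = b"myname\r\nmydetails\r\n"  # hypothetical values, in prompt order
stdout, stderr, rc = conn.run_executable('cmd.exe',
                                         arguments='/c {}'.format(callback),
                                         stdin=answers)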

AttributeError: module 'rethinkdb' has no attribute 'connect'

I am using RethinkDB in my Jupyter Python notebook.
I have installed the RethinkDB server on my Windows setup;
the server version is:
C:\Users\rethinkdb-2.3.6>rethinkdb --version
rethinkdb 2.3.6-windows (MSC 190024215)
On the client side, the rethinkdb package is rethinkdb==2.4.2.post1.
So when I use my Python code to connect to the DB, which I have already started on my Windows server, the line
self.conn = r.connect('localhost', 28015)
gives the error:
AttributeError: module 'rethinkdb' has no attribute 'connect'
I have seen earlier posts with comments that the way we connect to RethinkDB changed in 2.4.x, and I already tried the code below, but it did not help:
import rethinkdb as rdb
r = rdb.RethinkDB()
self.conn = r.connect('localhost', 28015)
Try this:
from rethinkdb import RethinkDB

r = RethinkDB()
then try connecting like this:
r.connect( "localhost", 28015).repl()
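Putting it together, a minimal end-to-end sketch for the 2.4.x driver (the db_list() call is just a connectivity check):

from rethinkdb import RethinkDB

r = RethinkDB()
r.connect("localhost", 28015).repl()  # .repl() makes run() use this connection implicitly
print(r.db_list().run())              # list databases as a quick sanity check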

Why am I getting : Unable to import module 'handler': No module named 'paramiko'?

I needed to move files with an AWS Lambda from an SFTP server to my AWS account,
and I found this article:
https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/
talking about paramiko as an SSH client candidate to move files over SSH.
So I wrote this class wrapper in Python, to be used from my serverless handler file:
import paramiko
import sys

class FTPClient(object):
    def __init__(self, hostname, username, password):
        """
        Creates the SFTP connection.
        Args:
            hostname (string): endpoint of the ftp server
            username (string): username for logging in on the ftp server
            password (string): password for logging in on the ftp server
        """
        try:
            self._host = hostname
            self._port = 22
            # lets you save results of the download into a log file.
            # paramiko.util.log_to_file("path/to/log/file.txt")
            self._sftpTransport = paramiko.Transport((self._host, self._port))
            self._sftpTransport.connect(username=username, password=password)
            self._sftp = paramiko.SFTPClient.from_transport(self._sftpTransport)
        except:
            print("Unexpected error", sys.exc_info())
            raise

    def get(self, sftpPath):
        """
        Downloads a file from the SFTP server and returns its contents.
        Args:
            sftpPath = "path/to/file/on/sftp/to/be/downloaded"
        """
        localPath = "/tmp/temp-download.txt"
        self._sftp.get(sftpPath, localPath)
        self._sftp.close()
        with open(localPath, 'r') as tmpfile:
            return tmpfile.read()

    def close(self):
        self._sftpTransport.close()
On my local machine it works as expected (test.py):
import ftp_client

sftp = ftp_client.FTPClient(
    "host",
    "myuser",
    "password")
file = sftp.get('/testFile.txt')
print(file)
But when I deploy it with Serverless and run the handler.py function (same as the test.py above), I get the error:
Unable to import module 'handler': No module named 'paramiko'
It looks like the deployed function is unable to import paramiko (from the article above, it seems it should be available for Lambda Python 3 on AWS), shouldn't it?
If not, what's the best practice for this case? Should I include the library in my local project and package/deploy it to AWS?
A comprehensive guide/tutorial exists at:
https://serverless.com/blog/serverless-python-packaging/
It uses the serverless-python-requirements package as a Serverless node plugin, enabled as sketched below.
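A minimal serverless.yml sketch enabling the plugin:

plugins:
  - serverless-python-requirements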
Creating a virtualenv and a running Docker daemon are required to pack up your Serverless project before deploying to AWS Lambda.
In case you use the following in your serverless.yml:
custom:
  pythonRequirements:
    zip: true
you have to use this code snippet at the start of your handler:
try:
    import unzip_requirements
except ImportError:
    pass
All details can be found in the Serverless Python Requirements documentation.
You have to create a virtualenv, install your dependencies, and then zip all files under site-packages/:
sudo pip install virtualenv
virtualenv -p python3 myvirtualenv
source myvirtualenv/bin/activate
pip install paramiko
cp handler.py myvirtualenv/lib/python3.6/site-packages/
cd myvirtualenv/lib/python3.6/site-packages/ && zip -r ../../../../package.zip .
then upload package.zip to Lambda.
You have to provide all dependencies that are not installed in AWS' Python runtime.
Take a look at Step 7 in the tutorial. It looks like he is adding the dependencies from the virtual environment to the zip file, so I'd assume your ZIP file should contain the following:
your worker_function.py on the top level
a folder paramiko with the files installed in the virtual env
Please let me know if this helps.
I tried various blogs and guides like:
web scraping with lambda
AWS Layers for Pandas
spending hours trying things out, facing SIZE issues and being unable to import modules, etc.
I nearly reached the end (that is, invoking my handler function LOCALLY), but even though my function was fully deployed correctly and could be invoked LOCALLY with no problems, it was impossible to invoke it on AWS.
The most comprehensive and by far the best guide that is ACTUALLY working is the one mentioned above by @koalaok! Thanks buddy!
actual link

Importing pyodbc results as Internal server error in Apache HTTP Server

Executing the command as c:\>pip install pyodbc creates the file "pyodbc.cp36-win32.pyd":
Collecting pyodbc
Using cached pyodbc-4.0.21-cp36-cp36m-win32.whl
Installing collected packages: pyodbc
Successfully installed pyodbc-4.0.21
When I try to run the code below under the Apache24 server, it results in an Internal Server Error:
import pyodbc

cnxn = pyodbc.connect("Driver={ODBC Driver 13 for SQL Server};"
                      "Server=DESKTOP;"
                      "Database=demo2017;"
                      "Trusted_Connection=yes;")
cursor = cnxn.cursor()
cursor.execute('SELECT * FROM Table')
for row in cursor:
    print('row = %r' % (row,))
Running it in the Python shell as
C:\Apache24\htdocs>python mssql_odbc.py
displays the results fine, but not in the Apache HTTP server.
In the httpd.conf file, the line
LoadModule pyodbc_module "c:/users/desktop/appdata/local/programs/python/python36-32/lib/site-packages/pyodbc.cp36-win32.pyd"
results in:
httpd: Syntax error on line 571 of C:/Apache24/conf/httpd.conf: Can't locate API module structure `pyodbc_module' in file C:/Users/Desktop/AppData/Local/Programs/Python/Python36-32/Lib/site-packages/pyodbc.cp36-win32.pyd: No error
So are there any modules or code that should be imported/modified?
The Apache 500 Internal Server Error is solved: the script was importing pypyodbc instead of pyodbc. With import pyodbc I was able to connect successfully and retrieve the results, in the Python shell as well as from the database. Note also that LoadModule expects an Apache module built against the Apache API, not a Python extension (.pyd), which is why httpd reports "Can't locate API module structure".
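For reference, a minimal CGI-style sketch with the corrected import (a sketch, assuming the script is served via mod_cgi; the connection string is the one from the question):

#!/usr/bin/env python
# minimal CGI sketch: Apache runs this via mod_cgi, so it must emit
# an HTTP header before any other output (a common cause of 500 errors)
import pyodbc  # note: pyodbc, not pypyodbc

print("Content-Type: text/plain")
print()

cnxn = pyodbc.connect("Driver={ODBC Driver 13 for SQL Server};"
                      "Server=DESKTOP;"
                      "Database=demo2017;"
                      "Trusted_Connection=yes;")
cursor = cnxn.cursor()
cursor.execute('SELECT * FROM Table')
for row in cursor:
    print('row = %r' % (row,))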
