I want to store my AWS IoT MQTT messages in PostgreSQL. I have already connected my local PostgreSQL client to the Amazon RDS instance. Now I need to create a connection from AWS Lambda and send the data to the PostgreSQL database. But whenever I test my Lambda function, it gives me a "NameError: name 'conn' is not defined" error. Here is my Python code in AWS Lambda. I have also included the psycopg2 library in my project.
import sys
import logging
import datetime

import rds_config
import psycopg2

#rds settings
rds_host = "myhost"
name = "username"
password = "username_password"
db_name = "dbname"

logger = logging.getLogger()
logger.setLevel(logging.INFO)

try:
    conn = psycopg2.connect(host=rds_host, user=name, password=password,
                            dbname=db_name, connect_timeout=5)
except:
    logger.error("ERROR: Unexpected error: Could not connect to PostgreSQL instance.")
logger.info("SUCCESS: Connection to RDS PostgreSQL instance succeeded")

def handler(event, context):
    """
    This function fetches content from the PostgreSQL RDS instance
    """
    item_count = 0
    with conn.cursor() as cur:
        # Pass the values as query parameters instead of naming them inside the SQL string.
        cur.execute(
            'insert into awsiotdata (serialnumber, dateandtime, clicktype, batteryvoltage) values (%s, %s, %s, %s)',
            (event['serialNumber'], datetime.datetime.utcnow(), event['clickType'], event['batteryVoltage']))
        conn.commit()
        cur.execute("select * from awsiotdata")
        for row in cur:
            item_count += 1
            logger.info(row)
            #print(row)
    return "Added %d items from RDS PostgreSQL table" % (item_count)
You are hiding the real error message. The exception handling pattern for Python looks like this:
try:
    conn = psycopg2.connect(host=rds_host,
                            user=name,
                            password=password,
                            database=db_name)
except Exception as e:
    print(e)
This way you will see the real error message:
invalid dsn: invalid connection option "passwd"
Edit #1:
"Timeout" means that lambda can't connect because of "Security group rules" for RDS instance. Please keep in mind that even public RDS instance by default have inbound restriction by IP (i.e. it is posible to connect from PC but it is imposible to connect from AWS Lambda).
requirements.txt
click==8.1.3
Flask==2.2.2
itsdangerous==2.1.2
Jinja2==3.1.2
MarkupSafe==2.1.2
pyodbc==4.0.35
Werkzeug==2.2.2
app.py
import pyodbc
from flask import Flask, render_template

#def get_db_connect():
#    conn = pyodbc.connect('Driver={ODBC Driver 18 for SQL Server};Server=tcp:servername.database.windows.net,1433;Database=Dev-testing;Uid=username;Pwd={supersecurepassword};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;')
#    return conn

app = Flask(__name__)

@app.route('/')
def index():
#    conn = get_db_connect()
#    assets = conn.execute('SELECT * FROM chosen_table').fetchall()
#    conn.close()
    return render_template('index.html')
If I comment out the import, it produces the base page and works. But having that import causes the container to crash. Any help would be greatly appreciated.
I need to establish a DB connection to an Azure SQL instance. I have tried to follow tutorials, but nothing seems to work.
First, I installed pyodbc on my local machine:

pip install pyodbc
According to the question above, I created an Azure SQL Database in the portal.
Then I created a Python application that connects to the Azure SQL database by importing pyodbc:
from flask import Flask, render_template
import pyodbc

app = Flask(__name__)

server = 'tcp:***.database.windows.net'
database = '****'
username = '****'
password = '*****'
cnxn = pyodbc.connect('DRIVER={ODBC Driver 18 for SQL Server};SERVER='+server+';DATABASE='+database+';ENCRYPT=yes;UID='+username+';PWD='+password)
cursor = cnxn.cursor()

@app.route('/')
def index():
    cursor.execute("select current_timestamp;")
    row = cursor.fetchone()
    return 'Current Timestamp: ' + str(row[0])

if __name__ == '__main__':
    app.run()
I started running my application locally, and it works fine. Then I deployed the application to an Azure Web App, and after deploying it also runs fine.
I'm setting up a cron job that fetches some data from MongoDB Atlas in Python 3 through PyMongo, on cPanel. I always get an "Error 111: Connection refused".
I am using Python 3.6 and pymongo 3.9.0, with cloud MongoDB 4.0.2.
I have tried via SSHTunnelForwarder, but I'm not sure how to give the host IP address, since the MongoDB is in a cluster.
import pymongo

class DbConnection():
    def __init__(self):
        self.connectionServer = "mongodb+srv://"
        self.userName = "name"
        self.userPass = "pass"
        self.connectionCluster = "@temp-cluster0-lt2rb.mongodb.net"
        self.connectionDb = "developmentDB"

    def db_connect(self):
        ''' This function is used to connect to the remote db with authentication
        Return type --> returns the url string of the db
        parameters --> self
        '''
        try:
            connectionUrl = self.connectionServer + self.userName + ":" + self.userPass + self.connectionCluster + "/test?retryWrites=true&w=majority"
            print(connectionUrl)
            myClient = pymongo.MongoClient(connectionUrl, port=12312)
            db = myClient.test
            print(myClient.test)
        except pymongo.errors.PyMongoError as e:
            print(e)
I'm expecting it to connect to the MongoDB cluster DB and read/Write the documents through it.
The comments on the question solved the issue, so I'll just put this here for future reference.
This error can happen to most people because of a misconfiguration when moving from a self-hosted MongoDB to a remote host service (like MongoDB Atlas).
The +srv is the DNS Seed List Connection Format. When switching from a port-based connection to a DNS-seed-based connection, we should remove any port configuration from our connection string, i.e.:
import pymongo

class DbConnection():
    def __init__(self):
        self.connectionServer = "mongodb+srv://"
        self.userName = "name"
        self.userPass = "pass"
        self.connectionCluster = "@temp-cluster0-lt2rb.mongodb.net"
        self.connectionDb = "developmentDB"

    def db_connect(self):
        ''' This function is used to connect to the remote db with authentication
        Return type --> returns the url string of the db
        parameters --> self
        '''
        try:
            connectionUrl = self.connectionServer + self.userName + ":" + self.userPass + self.connectionCluster + "/test?retryWrites=true&w=majority"
            print(connectionUrl)
            # myClient = pymongo.MongoClient(connectionUrl, port=12312)
            # We remove the PORT
            myClient = pymongo.MongoClient(connectionUrl)
            db = myClient.test
            print(myClient.test)
        except pymongo.errors.PyMongoError as e:
            print(e)
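Once the client connects, reading and writing documents works as usual. A short sketch, assuming the corrected connection string above and a hypothetical devices collection in developmentDB:

import pymongo

client = pymongo.MongoClient("mongodb+srv://name:pass@temp-cluster0-lt2rb.mongodb.net/test?retryWrites=true&w=majority")
collection = client["developmentDB"]["devices"]  # hypothetical collection name
collection.insert_one({"serial": 1, "status": "ok"})  # write a document
for doc in collection.find({"status": "ok"}):  # read it back
    print(doc)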
I have a database url that looks like this:
jdbc:redshift://<database_name>.company.com:5439/<database_name>?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory
How do I connect to this JDBC URL using Python? What is a JDBC URL anyway? Can I connect to it using:
import psycopg2

con = psycopg2.connect(
    dbname='jdbc:redshift://<database_name>.<company>.com:5439/<database_name>?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory',
    host='host',
    port='5439',
    user='user',
    password='pwd'
)
I am using a better way of connecting to Redshift via Python.
Please follow these steps -
Create an IAM policy for GetClusterCredentials - DOCUMENTATION
Where to attach this policy? -
a. Running the Python code on EC2 or any other service -> attach the IAM policy to a role and attach it to that particular service or IAM role.
b. Local machine -> attach it to the AWS user you have configured on your local system (via the aws configure CLI command, by providing an Access Key and Secret Access Key).
Let's use a config.ini as a central place to store any static values -
My Redshift JDBC URL is like -
jdbc:redshift://dev.<some_value_like_company>.us-west-2.redshift.amazonaws.com:5439/dev_database
My Config.ini File is like -
[Redshift]
port = 5439
username = dev_user
database_name = dev_database
cluster_id = dev
url = dev.<some_value_like_company>.<region>.redshift.amazonaws.com
region = us-west-2
Create a connection -
# All imports
import logging
import psycopg2
import boto3
import configparser

def db_connection():
    logger = logging.getLogger(__name__)

    parser = configparser.ConfigParser()
    parser.read('config.ini')

    RS_PORT = parser.get('Redshift', 'port')
    RS_USER = parser.get('Redshift', 'username')
    DATABASE = parser.get('Redshift', 'database_name')
    CLUSTER_ID = parser.get('Redshift', 'cluster_id')
    RS_HOST = parser.get('Redshift', 'url')
    REGION_NAME = parser.get('Redshift', 'region')

    # Ask Redshift for temporary credentials instead of storing a password.
    client = boto3.client('redshift', region_name=REGION_NAME)
    cluster_creds = client.get_cluster_credentials(DbUser=RS_USER,
                                                   DbName=DATABASE,
                                                   ClusterIdentifier=CLUSTER_ID,
                                                   AutoCreate=False)
    try:
        conn = psycopg2.connect(
            host=RS_HOST,
            port=RS_PORT,
            user=cluster_creds['DbUser'],
            password=cluster_creds['DbPassword'],
            database=DATABASE
        )
        print("pass")
        print(conn)
        return conn
    except psycopg2.Error:
        logger.exception('Failed to open database connection.')
        print("Failed")

db_connection()
Import and call the function wherever necessary.
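For example, a usage sketch (assuming the code above is saved as a module named redshift_db.py - the module name is hypothetical):

from redshift_db import db_connection  # hypothetical module name

conn = db_connection()
with conn.cursor() as cur:
    cur.execute('SELECT current_date')  # any quick sanity-check query
    print(cur.fetchone())
conn.close()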
I would prefer the above instead of hard-coding the username and password for any user, because -
it's simply not good practice;
besides, if you use a public repo (GitHub), it makes the username and password public, which might be a nightmare if someone uses them for the wrong reasons.
Using IAM is free and secure :p.
Do let me know if this helps. If you still need to connect to Redshift the way you wanted, I will post an answer later after trying it out myself.
Sample IAM policy for GetClusterCredentials -
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "redshift:GetClusterCredentials",
                "redshift:CreateClusterUser",
                "redshift:JoinGroup"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:<account_number>:dbname:dev/dev_database",
                "arn:aws:redshift:us-west-2:<account_number>:dbuser:dev/dev",
                "arn:aws:redshift:us-west-2:<account_number>:dbuser:dev/dev_read"
            ]
        }
    ]
}
I'm developing a Flask app with MySQL (flask-mysqldb) and MQTT (flask-mqtt) integrations. I can perform any DB operation from a Flask method (e.g. @app.route('/')), but if I try to do it from an MQTT method when I receive a message (e.g. @mqtt.on_message()), it does nothing. That last method otherwise works perfectly, because it receives the message and shows it in the log.
I have a method that performs DB operations, and depending on where I call it from, it works or not. I guess it might be because of the MySQL object, but I don't know exactly.
Here is the code I'm using (just the problem):
@mqtt.on_message()
def handle_mqtt_message(client, userdata, message):
    print('New message {}'.format(message.payload.decode()))
    storeDB('test')  # Here it doesn't work

################## Methods ###########################
def storeDB(param_text):
    cur = mysql.connection.cursor()
    cur.execute(
        'INSERT INTO contacts (fullname, phone, email) VALUES (%s,%s,%s)', (param_text, param_text, param_text))
    mysql.connection.commit()

###################### FLASK #########################
@app.route('/')
def index():
    storeDB('temp')  # Here it works
    return 'Hello World'
If I access localhost, it shows the "Hello World" text in the browser and updates the DB; on the other hand, if I receive an MQTT message, it is shown in the terminal but the DB is not updated.
Thanks.
This is how I have it working, using the MySQLdb package:
import sys
import MySQLdb

try:
    db = MySQLdb.connect("HOST", "USER", "PASS", "DB")
except:
    print("Problem creating DB connection")
    sys.exit()

cursor = db.cursor()

def storeDB(param_text1, param_text2, param_text3):
    # Parameterized query: safer than building the SQL string by concatenation.
    query = "INSERT INTO `DB`.`TABLE` (`fullname`, `phone`, `email`) VALUES (%s, %s, %s)"
    try:
        cursor.execute(query, (param_text1, param_text2, param_text3))
        db.commit()
        print('DB updated')
    except:
        db.rollback()
        print("Problem updating DB :(")
I have an existing Postgres table in RDS, with a database name of my-rds-table-name.
I've connected to it using pgAdmin 4 with the following configuration for a read-only user:
host_name = "my-rds-table-name.123456.us-east-1.rds.amazonaws.com"
user_name = "my_user_name"
password = "abc123def345"
I have verified that I can query against the table.
However, I cannot connect to it using Python:
SQLAlchemy==1.2.16
psycopg2-binary==2.7.6.1
mysqlclient==1.4.1
With:
import psycopg2

engine = psycopg2.connect(
    database="my-rds-table-name",
    user="my_user_name",
    password="abc123def345",
    host="my-rds-table-name.123456.us-east-1.rds.amazonaws.com",
    port='5432'
)
It fails with
psycopg2.OperationalError: FATAL: database "my-rds-table-name" does not exist
Similarly, if I try to connect to it with sqlalchemy:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: database "my-rds-table-name" does not exist
What am I missing?
Thanks to John Rotenstein for his comment.
As he pointed out, my-rds-table-name is the database instance name, not the database name; the default database name is postgres.
import psycopg2

engine = psycopg2.connect(
    database="postgres",
    user="my_user_name",
    password="abc123def345",
    host="my-rds-table-name.123456.us-east-1.rds.amazonaws.com",
    port='5432'
)
Using SQLAlchemy you can do the following:

from sqlalchemy import create_engine

engine = create_engine('postgresql://postgres:postgres@<AWS_RDS_end-point>:5432/postgres')
Then you can update your database.
For example:
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
df.to_sql('tablename', engine, schema='public', if_exists='append', index=False)