I am using HAProxy for AWS RDS (MySQL) load balancing for my app, which is written in Flask.
The haproxy.cfg file has the following configuration for the DB:
listen mysql
    bind 127.0.0.1:3306
    mode tcp
    balance roundrobin
    option mysql-check user haproxy_check
    option log-health-checks
    server db01 MASTER_DATABASE_ENDPOINT.rds.amazonaws.com
    server db02 READ_REPLICA_ENDPOINT.rds.amazonaws.com
I am using SQLAlchemy, and its URI is:
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://USER:PASSWORD@127.0.0.1:3306/DATABASE'
When I run the APIs in my test environment, the ones that only read from the DB execute just fine, but the ones that write to the DB mostly give me errors like:
(pymysql.err.InternalError) (1290, 'The MySQL server is running with the --read-only option so it cannot execute this statement')
I think I need to use two URLs in this scenario, one for read-only operations and one for writes.
How does this work with Flask and SQLAlchemy behind HAProxy?
How do I tell my app to use one URL for write operations and the other HAProxy URL for read-only operations?
I didn't find any help in the SQLAlchemy documentation.
Binds
Flask-SQLAlchemy can easily connect to multiple databases. To achieve
that it preconfigures SQLAlchemy to support multiple “binds”.
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://USER:PASSWORD@DEFAULT:3306/DATABASE'
SQLALCHEMY_BINDS = {
    'master': 'mysql+pymysql://USER:PASSWORD@MASTER_DATABASE_ENDPOINT:3306/DATABASE',
    'read': 'mysql+pymysql://USER:PASSWORD@READ_REPLICA_ENDPOINT:3306/DATABASE'
}
Referring to Binds:
db.create_all(bind='read') # from read only
db.create_all(bind='master') # from master
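As a rough sketch of how those binds could be split between reads and writes in the app itself (not from the original answer; the User model and placeholder endpoints are assumptions, Flask-SQLAlchemy 2.x style):
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://USER:PASSWORD@MASTER_DATABASE_ENDPOINT:3306/DATABASE'
app.config['SQLALCHEMY_BINDS'] = {
    'master': 'mysql+pymysql://USER:PASSWORD@MASTER_DATABASE_ENDPOINT:3306/DATABASE',
    'read': 'mysql+pymysql://USER:PASSWORD@READ_REPLICA_ENDPOINT:3306/DATABASE'
}
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(64))

# Writes go through the default session, which is bound to the master URI.
def create_user(name):
    user = User(name=name)
    db.session.add(user)
    db.session.commit()
    return user

# Reads can be pointed at the replica explicitly by running them against the
# 'read' engine instead of the default one.
def count_users():
    read_engine = db.get_engine(app, bind='read')
    with read_engine.connect() as conn:
        return conn.execute(db.text('SELECT COUNT(*) FROM user')).scalar()
This way the read-only error disappears because writes never reach the replica, while read-heavy endpoints can be routed to the read bind.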
Related
How to get the server name in Python Django? We host a Django application on AWS behind Apache, on more than one server, scaled up based on load.
Now I need to find out from which server a particular request was made and from which server we got the response.
I am using Python 3.6, Django, Django REST framework, and Apache on an AWS machine.
I assume by server name you mean hostname; to get your hostname you can use:
import socket
socket.gethostname()
https://docs.python.org/3/library/socket.html#socket.gethostname
or you can query your instance metadata, where you'll get a richer result such as:
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
events/
hostname
iam/
instance-action
instance-id
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
services/
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
My opinion is to try using the instance-id.
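As an illustration (not part of the original answer), the metadata endpoint can be queried from Python like this; an IMDSv1-style plain GET is assumed, and newer instances may require IMDSv2 session tokens:
import urllib.request

METADATA_BASE = 'http://169.254.169.254/latest/meta-data/'

def get_metadata(path):
    # Fetch a single metadata value, e.g. 'instance-id' or 'public-ipv4'
    with urllib.request.urlopen(METADATA_BASE + path, timeout=2) as resp:
        return resp.read().decode()

print(get_metadata('instance-id'))
print(get_metadata('public-ipv4'))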
I have a MongoDB instance running on Kubernetes and I'm trying to connect to it using Python with the Kubernetes library.
I'm connecting to the context on the command line using:
kubectl config use-context CONTEXTNAME
With Python, I'm using:
from kubernetes import client, config
config.load_kube_config(
context = 'CONTEXTNAME'
)
To connect to MongoDB in cmd line:
kubectl port-forward svc/mongo-mongodb 27083:27017 -n production &
I then open a new terminal and use PORT_FORWARD_PID=$! to connect
I'm trying to connect to the MongoDB instance using Python with the Kubernetes client library; any ideas how to accomplish the above?
Define a Kubernetes Service, for example like this, and then reference your MongoDB using a connection string similar to mongodb://<service-name>.default.svc.cluster.local
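A minimal sketch of that approach, assuming the code runs inside the cluster and re-using the mongo-mongodb service name and production namespace from the port-forward command above:
import pymongo

# Cluster-internal DNS name of the Service: <service>.<namespace>.svc.cluster.local
client = pymongo.MongoClient('mongodb://mongo-mongodb.production.svc.cluster.local:27017/')
print(client.admin.command('ping'))  # returns {'ok': 1.0} if the service is reachable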
My understanding is that you need to find out your DB client endpoint.
That can be achieved by following this article: MongoDB on K8s.
Make sure you get the URI for MongoDB.
(example)
mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname_?
and after that, you can create your DB client in your Python script:
import pymongo

# Create a MongoDB client
client = pymongo.MongoClient('mongodb://......')
# Specify the database to be used
db = client.test
# Specify the collection to be used
col = db.myTestCollection
# Insert a single document
col.insert_one({'hello': 'world'})
# Find the document that was previously written
x = col.find_one({'hello': 'world'})
# Print the result to the screen
print(x)
# Close the connection
client.close()
Hope that will give you an idea.
Good luck!
I am using Flask-SQLAlchemy in my Google App Engine standard environment project to try to connect to my GCP PostgreSQL database.
According to the Google docs, the URL can be created in this format:
# postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_socket=/cloudsql/<cloud_sql_instance_name>
and below is my code
from flask import Flask, request, jsonify
from flask_sqlalchemy import SQLAlchemy
import constants

app = Flask(__name__)
# Database configuration from GCP postgres+pg8000
DB_URL = 'postgres+pg8000://{user}:{pw}@/{db}?unix_socket=/cloudsql/{instance_name}'.format(user=user, pw=password, db=dbname, instance_name=instance_name)
app.config['SQLALCHEMY_DATABASE_URI'] = DB_URL
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False  # silence the deprecation warning
sqldb = SQLAlchemy(app)
This is the error I keep getting:
File "/env/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 412, in connect
    return self.dbapi.connect(*cargs, **cparams)
TypeError: connect() got an unexpected keyword argument 'unix_socket'
The argument to specify a unix socket varies depending on what driver you use. According to the pg8000 docs, you need to use unix_sock instead of unix_socket.
To see this in the context of an application, you can take a look at this sample application.
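Applied to the code from the question, that would look roughly like this (a sketch only; the placeholders mirror the question, and .s.PGSQL.5432 is the standard PostgreSQL socket file name):
# Same DB_URL as in the question, but with pg8000's unix_sock keyword and the
# .s.PGSQL.5432 socket suffix.
DB_URL = (
    'postgres+pg8000://{user}:{pw}@/{db}'
    '?unix_sock=/cloudsql/{instance_name}/.s.PGSQL.5432'
).format(user=user, pw=password, db=dbname, instance_name=instance_name)
app.config['SQLALCHEMY_DATABASE_URI'] = DB_URL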
It's been more than 1.5 years and no one has posted the solution yet :)
Anyway, just use the below URI:
postgres+psycopg2://<db_user>:<db_pass>@<public_ip>/<db_name>?host=/cloudsql/<cloud_sql_instance_name>
And yes, don't forget to add your system's public IP address to the authorized networks.
Example from the docs
As you can read in the gcloud guides, an example connection string is
postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_sock=<socket_path>/<cloud_sql_instance_name>/.s.PGSQL.5432
Varying engine and socket part
Be aware that the engine part postgres+pg8000 varies depending on your database and the driver used. Also, depending on your database client library, the socket part ?unix_sock=<socket_path>/<cloud_sql_instance_name>/.s.PGSQL.5432 may be needed or can be omitted, as per:
Note: The PostgreSQL standard requires a .s.PGSQL.5432 suffix in the socket path. Some libraries apply this suffix automatically, but others require you to specify the socket path as follows: /cloudsql/INSTANCE_CONNECTION_NAME/.s.PGSQL.5432.
PostgreSQL and flask_sqlalchemy
For instance, I am using PostgreSQL with flask_sqlalchemy as the database client and pg8000 as the driver, and my working connection string is only postgres+pg8000://<db_user>:<db_pass>@/<db_name>.
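Wired into Flask-SQLAlchemy, that setup boils down to something like this sketch (placeholders only, not a verified configuration):
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Connection string without the socket part, as described above.
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres+pg8000://<db_user>:<db_pass>@/<db_name>'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app)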
I'm building a webapp using the following architecture:
a postgresql database (called DB),
a NodeJS service (called DBService) using Sequelize to manipulate the DB and Epilogue to expose a REST interface via Express,
a NodeJS service called Backend serving as a backend and using DBService through REST calls
an AngularJS website called Frontend using Backend
Here are the versions I'm using:
PostgreSQL 9.3
Sequelize 2.0.4
Epilogue 0.5.2
Express 4.13.3
My DB schema is quite complex, containing 36 tables, and some of them contain a few hundred records. The DB is not meant to be written to very often, but mostly to be read from.
But recently I created a script in Backend to do a complete check-up of the data contained in the DB: basically this script retrieves all data from all tables and does some basic checks on it. Currently the script only reads from the database.
To run my script I had to remove the pagination limit of Epilogue by using the option pagination: false (see https://github.com/dchester/epilogue#pagination).
But now when I launch my script I randomly get this kind of error:
The request failed when trying to retrieve a uniquely associated objects with URL:http://localhost:3000/CallTypes/178/RendererThemes.
Code : -1
Message : Error: connect ECONNRESET 127.0.0.1:3000
The error appears randomly during the script execution: it's not always this URL that is returned, and not even always the same tables or relations. The error message before the code is a custom message returned by Backend.
The URL is a reference to DBService, but I don't see any error in it, even when using logging: console.log in Sequelize and DEBUG=express:* to see what happens in Express.
I tried putting some setTimeout calls in my Backend script to slow it down, without real change. I also tried to tweak different values like the PostgreSQL max_connections limit (I set it to 1000 connections), or Sequelize's maxConcurrentQueries and pool values, but without success yet.
I did not find where I can customize the connection pool of Express; maybe that would do the trick.
I assume that the error comes from DBService, from the Express configuration, or from somewhere in the configuration of the DB (either in Sequelize/Epilogue or even in the PostgreSQL server itself), but as I did not see any error in any log I'm not sure.
Any idea to help me solve it?
EDIT
After further investigation I may have found the answer, which is very similar to How to avoid a NodeJS ECONNRESET error?: I'm using my own RestClient object to do my HTTP requests, and this object was built as a singleton with this method:
var NodeRestClient : any = require('node-rest-client').Client;
...
static getClient() {
    if (RestClient.client == null) {
        RestClient.client = new NodeRestClient();
    }
    return RestClient.client;
}
Then I was always using the same object to do all my requests, and when the process was too fast, it created collisions... So I just removed the test if (RestClient.client == null), and for now it seems to work.
If there is a better way to manage this, by closing requests or managing a pool, feel free to contribute :)
I am trying to get the React-fullstack seed running on my local machine. The first thing I want to do is connect the server to a database. In the config.js file there is this line:
export const databaseUrl = process.env.DATABASE_URL || 'postgresql://demo:Lqk62xgfsdm5UhfR@demo.ctbl5itzitm4.us-east-1.rds.amazonaws.com:5432/membership01';
I do not believe I have access to the account created in the seed so I am trying to create my own AWS PG RDS. I have the following information and can access more:
endpoint: my110.cqw0hciryhbq.us-west-2.rds.amazonaws.com:5432
group-ID: sg-1422f322
VPC-ID: vpc-ec22d922
masterusername: my-username
password: password444
According to the PG documentation I should be looking for something like this:
var conString = "postgres://username:password@localhost/database";
I currently have:
`postgres://my-username:password444@my110.cqw0hciryhbq.us-west-2.rds.amazonaws.com:5432`
What do I put in for 'database'?
Can someone share a method to ping the DB from the seed on my local machine to see if they are connected and working properly?
I can't really speak to anything specific to the React package; however, generally when connecting to a Postgres server (whether RDS or your own install), you connect with the name of the database at the end of the connection string, hence:
postgres://username:password@hostname:port/databaseName
So, when you created the RDS database (I assume you already spun up RDS??), you had to tell RDS what you wanted to call the database. If you spun up RDS already, log in to the AWS console, go to RDS, go to your RDS instances and then select the correct instance, click "Instance Actions" and then "See Details". That page will show you a bunch of details for your RDS instance, one of which is "DB Name". That's the name you put in the connection string.
If you have not already spun up your own RDS instance, then go ahead and do so and you will see where it asks for a database name that you specify.
Hope that helps, let me know if it doesn't.
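Not specific to the React seed, but a quick way to check from your local machine that the RDS instance accepts connections is a small Python script against the same connection string (psycopg2 here; the credentials are the ones from the question, and <databaseName> is the DB Name mentioned above):
import psycopg2

# Fill in the database name you chose when creating the RDS instance.
conn = psycopg2.connect('postgres://my-username:password444@my110.cqw0hciryhbq.us-west-2.rds.amazonaws.com:5432/<databaseName>')
cur = conn.cursor()
cur.execute('SELECT version()')
print(cur.fetchone())  # prints the PostgreSQL server version if the connection works
cur.close()
conn.close()
If the connection times out instead, check that the security group (sg-1422f322 above) allows inbound traffic on port 5432 from your IP.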