Hopefully this should be a quick answer for somebody. I've looked through the docs a bit, but still haven't found a definitive answer. I have a number of 'idle' connections that stick around, even if I perform a session.close() in SQLAlchemy. Are these idle connections the way SQLAlchemy/Postgres handle connection pooling?
This is the query I used to check DB connection activity:
SELECT * FROM pg_stat_activity ;
Here is sample code:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

application = Flask(__name__)
application.config.from_object('config')
db = SQLAlchemy(application)

class Brand(db.Model):
    id = db.Column(db.Integer, primary_key=True)

@application.route('/')
def documentation():
    all = Brand.query.all()
    db.session.remove()  # don't need this since it's called on teardown
    return str(len(all))

if __name__ == '__main__':
    application.run(host='0.0.0.0', debug=True)
Yes. Closing a session does not immediately close the underlying DBAPI connection. The connection gets put back into the pool for subsequent reuse.
From the SQLAlchemy docs:
[...] For each Engine encountered, a Connection is associated with it, which is acquired via the Engine.contextual_connect() method. [...]
Then, Engine.contextual_connect() points you to Engine.connect(), which states the following:
The Connection object is a facade that uses a DBAPI connection internally in order to communicate with the database. This connection is procured from the connection-holding Pool referenced by this Engine. When the close() method of the Connection object is called, the underlying DBAPI connection is then returned to the connection pool, where it may be used again in a subsequent call to connect().
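If you want those idle connections actually closed rather than returned to the pool, you can either disable pooling or dispose of the pool explicitly. A minimal sketch using standard SQLAlchemy APIs (the connection URL is a placeholder):

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# Option 1: disable pooling entirely, so connections are closed as soon as
# they are checked back in.
engine = create_engine('postgresql://user:pass@localhost/mydb', poolclass=NullPool)

# Option 2: keep the default pool, but close all pooled connections explicitly
# when you are done with them (e.g. at application shutdown):
# engine.dispose()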
Related
I have a multithreaded application that attaches a session to each user in my CSV file.
from concurrent.futures import ThreadPoolExecutor

def main():
    executor = ThreadPoolExecutor()
    futures = [executor.submit(bot.run,
                               user["login"],
                               user["pwd"]) for user in client_data_file]
The code above then calls the run method to perform HTTP operations after logging in to the website:
def run(login, pwd):
    session = login_service(login, pwd)
    # Here I'm making http operations for each user

def login_service(user, pwd):
    s = get_session()

def get_session():
    """Attach a session to the particular thread on which it's running."""
    # thread_local is assumed to be a threading.local() instance
    if not hasattr(thread_local, "session"):
        try:
            thread_local.session = http.client.HTTPSConnection("url_for_my_site", 80)
        except Exception as e:
            print(e)
    return thread_local.session
To do these HTTP operations, I need to keep each user's connection alive.
I was using the requests library, but its performance is poor compared with Python's standard http library.
With requests, it's simple:
session = requests.Session()
But with http.client, I'm not sure whether the HTTPSConnection object keeps the session alive.
I'm trying to keep an HTTP session alive using low-level libraries.
The need is not to make multiple requests at the same time, but to save a few seconds per login in my CSV on a high-traffic website (that's why the threading).
Feel free to suggest another low-level library.
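For reference, an HTTPSConnection does hold a single TCP connection open across requests as long as the server allows keep-alive. A rough sketch of reusing one connection per thread (hostname, port, and path are placeholders):

import http.client
import threading

thread_local = threading.local()

def get_connection():
    # One HTTPSConnection per thread, created lazily and reused afterwards.
    if not hasattr(thread_local, "conn"):
        thread_local.conn = http.client.HTTPSConnection("example.com", 443)
    return thread_local.conn

def fetch(path):
    conn = get_connection()
    conn.request("GET", path)   # reuses the same TCP connection
    resp = conn.getresponse()
    return resp.read()          # read the body before issuing the next request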
I have a server-client application using xmlrpc.server and xmlrpc.client where the clients request data from the server. As the server only returns this data once certain conditions are met, the clients make the same call over and over again, and currently the tcp connection is re-initiated with each call. This creates a noticeable delay.
I have a fixed number of clients that connect to the server at the beginning of the application and shutdown when the whole application is finished.
I tried googling about keeping the TCP connection open, but everything I could find either talked about the old xmlrpclib or did not apply to my Python version.
Client-side code:
import xmlrpc.client as xc
server = xc.ServerProxy(host_IP,8000)
var = False
while type(var)==bool:
var = server.pull_update()
# this returns "False" while the server determines the conditions
# for the client to receive the update aren't met; and the update
# once the conditions are met
Server-side, I am extending xmlrpc.server.SimpleXMLRPCServer with the default xmlrpc.server.SimpleXMLRPCRequestHandler. The function in question is:
def export_pull_update(self):
    if condition:
        return self.var
    else:
        return False
Is there a way to get xmlrpc.server to keep the connection alive between calls for the server?
Or should I go the route of using ThreadingMixIn and not completing the client-request until the condition is met?
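One commonly suggested approach (a sketch, not verified against this exact setup) is to make the server's request handler speak HTTP/1.1, which enables persistent connections between calls:

from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler

class KeepAliveHandler(SimpleXMLRPCRequestHandler):
    # HTTP/1.1 keeps the TCP connection open between requests;
    # the default HTTP/1.0 closes it after every response.
    protocol_version = "HTTP/1.1"

server = SimpleXMLRPCServer(("0.0.0.0", 8000), requestHandler=KeepAliveHandler)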
I am using Flask-SQLAlchemy in my Google App Engine standard environment project to try to connect to my GCP PostgreSQL database.
According to the Google docs, the URL can be created in this format:
# postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_socket=/cloudsql/<cloud_sql_instance_name>
And below is my code:
from flask import Flask, request, jsonify
from flask_sqlalchemy import SQLAlchemy
import constants

app = Flask(__name__)

# Database configuration from GCP postgres+pg8000
DB_URL = 'postgres+pg8000://{user}:{pw}@/{db}?unix_socket=/cloudsql/{instance_name}'.format(
    user=user, pw=password, db=dbname, instance_name=instance_name)

app.config['SQLALCHEMY_DATABASE_URI'] = DB_URL
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False  # silence the deprecation warning

sqldb = SQLAlchemy(app)
This is the error I keep getting:
File "/env/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 412, in connect return self.dbapi.connect(*cargs, **cparams) TypeError: connect() got an unexpected keyword argument 'unix_socket'
The argument to specify a unix socket varies depending on what driver you use. According to the pg8000 docs, you need to use unix_sock instead of unix_socket.
To see this in the context of an application, you can take a look at this sample application.
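For example, the URL from the question would then look roughly like this (placeholders as before; whether the .s.PGSQL.5432 suffix is required depends on the library, as another answer below notes):

# Same URL as in the question, but with pg8000's unix_sock parameter and the
# full socket path.
DB_URL = (
    'postgres+pg8000://{user}:{pw}@/{db}'
    '?unix_sock=/cloudsql/{instance_name}/.s.PGSQL.5432'
).format(user=user, pw=password, db=dbname, instance_name=instance_name)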
It's been more than 1.5 years and no one has posted the solution yet :)
Anyway, just use the URI below:
postgres+psycopg2://<db_user>:<db_pass>@<public_ip>/<db_name>?host=/cloudsql/<cloud_sql_instance_name>
And yes, don't forget to add your system's public IP address to the authorized networks.
Example from the docs.
As you can read in the gcloud guides, an example connection string is
postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_sock=<socket_path>/<cloud_sql_instance_name>/.s.PGSQL.5432
Varying engine and socket part
Be aware that the engine part postgres+pg8000 varies depending on your database and the driver you use. Also, depending on your database client library, the socket part ?unix_sock=<socket_path>/<cloud_sql_instance_name>/.s.PGSQL.5432 may be needed or can be omitted, as per:
Note: The PostgreSQL standard requires a .s.PGSQL.5432 suffix in the socket path. Some libraries apply this suffix automatically, but others require you to specify the socket path as follows: /cloudsql/INSTANCE_CONNECTION_NAME/.s.PGSQL.5432.
PostgreSQL and flask_sqlalchemy
For instance, I am using PostgreSQL with flask_sqlalchemy as the database client and pg8000 as the driver, and my working connection string is just postgres+pg8000://<db_user>:<db_pass>@/<db_name>.
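Wired into flask_sqlalchemy, that shorter string looks roughly like this (placeholders kept as-is; this is a sketch of my setup, not the only possible one):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Shorter URI without the explicit unix_sock part, as described above.
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres+pg8000://<db_user>:<db_pass>@/<db_name>'
db = SQLAlchemy(app)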
I use pymongo (from pymongo import MongoClient) and it works very well, but now I have changed my program to an async, multithreaded server and I get errors like:
can't pickle _thread.lock objects
or
cannot pickle local object MongoClient.__init__.<locals>.target
I only use the database for reading from it and don't change anything, so there is no need for a lock on MongoDB, but the threading and queue modules set a lock on it and I get the error.
Is there any way to create a thread-safe connection to Mongo just for reading data (find, aggregate)?
My server dynamically calls 1 to 5 classes (which are multithreaded) and all of them connect to the DB.
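MongoClient is documented as thread-safe and does its own connection pooling, so the usual pattern is to create it once and share it across threads rather than pickling it or passing it between processes (which is what triggers the errors above). A minimal sketch with placeholder connection string, database, and collection names:

import threading
from pymongo import MongoClient

# One client for the whole process; it is thread-safe and pools connections.
client = MongoClient("mongodb://localhost:27017")
db = client["my_database"]

def read_worker(query):
    # Each thread can call find()/aggregate() on the shared client directly.
    return list(db["my_collection"].find(query))

threads = [threading.Thread(target=read_worker, args=({},)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()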
Architecturally, what is the best way to handle JDBC with multiple threads? I have many threads concurrently accessing the database. With a single connection and statement I get the following error message:
org.postgresql.util.PSQLException: This ResultSet is closed.
Should I use multiple connections, multiple statements or is there a better method? My preliminary thought was to use one statement per thread which would guarantee a single result set per statement.
You should use one connection per task. If you use connection pooling, you can't use prepared statements prepared by some other connection. All objects created by a connection (ResultSet, PreparedStatement) are invalid once the connection is returned to the pool.
So it looks like this:
public void getSomeData() {
    Connection conn = datasource.getConnection();
    PreparedStatement st = null;
    try {
        st = conn.prepareStatement(...);
        st.execute();
    } finally {
        close(st);
        close(conn);
    }
}
So in this case all your DAO objects take not a Connection but a DataSource object (java.sql.DataSource), which is really a pooled connection factory. In each method you first get a connection, do all your work, and close the connection. You should return the connection to the pool as quickly as possible. Once returned, the connection may not be physically closed, but rather reinitialized (all active transactions closed, all session variables destroyed, etc.).
Yes, use multiple connections with a connection pool. Open the connection for just long enough to do what you need, then close it as soon as you're done. Let the connection pool take care of the "physical" connection management for efficiency.