Python psycopg2 executing select pg_notify doesn't work - python-3.x

This is my very first question on Stack Overflow, so if I am doing something wrong, please be gentle.
I'm struggling to execute SELECT pg_notify from a Python script. It seems that it doesn't work at all.
My NodeJS server is listening on the 'testnotify' channel using pg-promise. I'm including this just for completeness, because it is working:
db.connect({direct: true})
    .then(sco => {
        sco.client.on('notification', data => {
            console.log('Received:', data);
        });
        return sco.none('LISTEN $1~', 'testnotify');
    })
    .catch(error => {
        console.log('Error:', error);
    });
My Python script should raise a notification after a series of successful db operations.
I'm doing it like this:
conn = psycopg2.connect(conn_string)
cur = conn.cursor()
cur.execute("SELECT pg_notify('%s', '%s');" % ('testnotify', 'blabla'))
or like this:
query = "SELECT pg_notify('testnotify', 'blabla');"
print(query)
cur.execute(query)
I've tried a similar approach with NOTIFY testnotify, 'blabla', and nothing works. Nothing happens on the NodeJS side.
But when I copy the result of print(query) from the Python console and execute it directly in PostgreSQL, it works like a charm.
I don't understand what's wrong with my code.
I'm using psycopg2 2.7.5, PostgreSQL 10, Node 10 LTS, and pg-promise, on Windows 10.
Side note: this is not a problem with Node, because notifications do arrive when pg_notify or NOTIFY is raised by a trigger on the source table in PostgreSQL, or when the notification is executed as a regular SQL query in the db. It fails only when I try to raise the notification from the Python script.
After two days of juggling with this, I think it is something obvious and stupid, but I can't see it. Or maybe it is just impossible...
Please help.

Oh my... I just found the solution... two ways to solve it, even.
The clue was in the psycopg2 documentation... obvious, huh?
To send a notification using
cur.execute("SELECT pg_notify('%s', '%s');" % ('testnotify', 'blabla'))
you have to set the connection to autocommit, like this:
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect(conn_string)
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
Or, if you don't want autocommit, then after doing
cur.execute("SELECT pg_notify('%s', '%s');" % ('testnotify', 'blabla'))
you have to commit it:
conn.commit()
Aaand now Node is receiving notifications from PostgreSQL via Python.
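Putting the two together, a minimal end-to-end sketch (conn_string stands for your own DSN; it also uses psycopg2's parameter passing instead of % string formatting, which avoids manual quoting):

import psycopg2
import psycopg2.extensions

conn = psycopg2.connect(conn_string)  # conn_string: your DSN
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
# With autocommit enabled the NOTIFY is delivered immediately;
# without it, a conn.commit() after execute() is required.
cur.execute("SELECT pg_notify(%s, %s);", ('testnotify', 'blabla'))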

Related

pymysql.err.OperationalError: (2006, "MySQL server has gone away (TimeoutError(110, 'Connection timed out'))")

My question is about reconnecting to the MySQL server when an error is encountered.
I am connecting to the MySQL server in Flask:
connection = pymysql.connect(host='host',
                             user='user',
                             connect_timeout=31536000,
                             password='passwd',
                             db='db_name',
                             charset='utf8mb4',
                             cursorclass=pymysql.cursors.DictCursor)
and using a cursor for the query as well.
Flask route code:
@app.route("/chart", methods=['GET', 'POST'])
def chart():
    try:
        with connection.cursor() as cursor:
            # line chart open tickets
            query = "select createdDate,rootCause,requestId from db_name;"
            df = pd.read_sql(query, connection)
            print(df)
    except pymysql.MySQLError as e:
        print(e)
I want to reconnect to the db when I get the error:
pymysql.err.OperationalError: (2006, "MySQL server has gone away (TimeoutError(110, 'Connection timed out'))")
Please help me find the solution for this error.
How do I reconnect to the database when an error is encountered?
Thanks in advance!
Looks like you are using a single connection forever. Try creating a new connection for each request and closing it once the required queries have run.
This issue can be avoided that way.
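For example, a minimal sketch of the per-request pattern (host, user, and the other values are the same placeholders as in the question):

import pymysql

def run_query(sql):
    # Open a fresh connection per request instead of reusing a global one.
    connection = pymysql.connect(host='host',
                                 user='user',
                                 password='passwd',
                                 db='db_name',
                                 charset='utf8mb4',
                                 cursorclass=pymysql.cursors.DictCursor)
    try:
        with connection.cursor() as cursor:
            cursor.execute(sql)
            return cursor.fetchall()
    finally:
        # Always release the connection, even if the query fails.
        connection.close()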
Setting pool_pre_ping=True when calling create_engine() seems to have helped quite a bit in my case.
Example:
engine = create_engine(db_connection, pool_pre_ping=True)
From the SQLAlchemy docs on pool_pre_ping:
"If True will enable the connection pool “pre-ping” feature that tests connections for liveness upon each checkout."

SqlAlchemy sessions get stuck when not using them

I'm having a hard time implementing a "MySqlClient" class for my application. My application consists of several modules which have to make use of my database, and some of the modules run on other threads.
My intention is to create an instance for every module that needs to communicate with my MySQL database. For example: every client connecting to a websocket server creates its own instance, a telegram bot client has its own instance, and so on.
I've been searching for days now; I've read the docs and searched the forums, but somehow I'm missing something or not implementing it the right way.
This is my class:
class MySqlClient():
    engine = None
    Session = None

    def __init__(self):
        # Create the engine once and share it across all instances.
        if MySqlClient.engine is None:
            MySqlClient.engine = sqlalchemy.create_engine("mysql+mysqlconnector://{0}:{1}@{2}/{3}".format(
                state.config["mysql"]["user"],
                state.config["mysql"]["password"],
                state.config["mysql"]["host"],
                state.config["mysql"]["database"]
            ))
            MySqlClient.Session = scoped_session(sessionmaker(bind=MySqlClient.engine))
            Base.metadata.create_all(MySqlClient.engine)
        self.session = MySqlClient.Session()

    def get_budget(self, budget_id):
        try:
            q = self.session.query(
                Budget.amount.label("budgetAmount"),
                func.sum(BudgetRecord.amount).label("total")
            ).filter(Budget.id == budget_id).join(BudgetRecord).filter(
                extract("month", BudgetRecord.ts) == datetime.datetime.now().month
            ).all()
            self.session.close()
            return {"budgetAmount": q[0].budgetAmount, "total": 0.0 if q[0].total is None else q[0].total}
        except Exception as ex:
            logging.error(ex)
            return None
When I start my application everything runs fine; I can call get_budget and it returns the data. However, if I then wait 5 minutes, the method won't run again (if I don't wait, it still works). About 15 minutes after I made the call, the query finally fails, saying the MySQL connection has dropped:
(mysql.connector.errors.OperationalError) MySQL Connection not available.
I also tried getting a new session before executing new queries. That didn't help either.
I've done things like this before, but it's the first time I'm using an ORM, and I'd like to keep the benefits of using one.
Any help would be greatly appreciated.
Regards
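The symptoms match the dropped-connection issue in the previous question: something between the app and MySQL closes idle connections, and the pool later hands back a dead one. A minimal sketch of the usual remedy, applied to the engine setup above (pool_pre_ping and pool_recycle are standard create_engine options; the recycle value is only a guess based on the ~5 minute cutoff described):

import sqlalchemy
from sqlalchemy.orm import scoped_session, sessionmaker

engine = sqlalchemy.create_engine(
    "mysql+mysqlconnector://user:password@host/database",  # placeholder URL
    pool_pre_ping=True,  # test each pooled connection for liveness on checkout
    pool_recycle=280     # recycle connections before the observed cutoff
)
Session = scoped_session(sessionmaker(bind=engine))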

pg-promise UTF connection string

pg-promise does not seem to understand UTF passwords? I can't make it work with them. Tried on Linux and OSX, Postgres 9.3 and 9.5; it doesn't seem to be version-specific. I looked into the code: pg-promise uses pg, which uses pg-connection-string to parse the connection string. I can't find the root of the problem. Please help.
code to reproduce:
MacBook-Air:js vao$ cat 2.js
var pgp = require("pg-promise")();
var cs = 'postgresql://utf:утф@127.0.0.1:5433/a';
var db = pgp(cs);
db.connect()
    .then(function (obj) {
        obj.done(); // success, release the connection;
    })
    .catch(function (error) {
        console.log("ERROR:", error.message || error);
    });
console.log(cs);
returns:
MacBook-Air:js vao$ node 2.js
postgresql://utf:утф@127.0.0.1:5433/a
ERROR: password authentication failed for user "utf"
Using same connection string with psql:
MacBook-Air:js vao$ psql 'postgresql://utf:утф@127.0.0.1:5433/a'
psql (9.5.3)
Type "help" for help.
a=> \q
Trying bad password deliberately with same connection string:
MacBook-Air:js vao$ psql 'postgresql://utf:утфWrongPassword@127.0.0.1:5433/a'
psql: FATAL: password authentication failed for user "utf"
MacBook-Air:js vao$
As the author of pg-promise, I'd like at least to offer some guidance here on where to look, not the exact answer, as I've never dealt with UTF passwords myself.
pg-promise uses node-postgres, which in turn uses pg-connection-string to parse the connection. If there is an issue, then it is most likely inside pg-connection-string.
I would recommend trying the Unicode notation that relies on %. Maybe it will work, maybe not; I've never tried it. Also, the following question was never answered: Node.js postgres UTF connect string, which isn't reassuring either.
Sorry, I cannot be of more help at present.
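For reference, the %-notation means percent-encoding each UTF-8 byte of the password. A quick way to compute it, shown in Python purely for illustration (whether pg-connection-string accepts the result is untested, as noted above):

import urllib.parse

# Percent-encode the UTF-8 bytes of the password.
print(urllib.parse.quote('утф'))  # -> %D1%83%D1%82%D1%84

# The connection string would then become:
# postgresql://utf:%D1%83%D1%82%D1%84@127.0.0.1:5433/a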

Error while executing a Ms Access db through nodejs

I am trying to access an MS Access 2007 db through nodejs on Windows 7, but even this simple query won't work. I receive the following message in the command prompt (this is a translation; the original is in Portuguese): "Operation not allowed when object is closed". Does anybody have any answers? The JavaScript code is below:
var ADODB = require('node-adodb');
var connection = ADODB.open('Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\\teste\\dbteste.accdb;Persist Security Info=False;');
ADODB.debug = true;
connection
    .query('SELECT * FROM [Tabela];')
    .on('done', function (data) {
        console.log('Result:'.green.bold, data);
    })
    .on('fail', function (data) {
    });
Thanks!
Try changing the connection string from:
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\\teste\\dbteste.accdb;Persist Security Info=False;
to:
Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\\teste\\dbteste.accdb;Persist Security Info=False;
The Jet 4.0 provider does not handle the newer .accdb format; the ACE 12.0 provider does. Also, check that the .accdb path is right.
Moreover, add ADODB.encoding = 'iso-8859-15'; for Portuguese encoding.
I'm not sure, but you may be able to replace [Tabela] with simply Tabela. The ; at the end of the SQL query is not necessary.
I encountered this issue too. It seems to be a bug in the node-adodb module (last seen 8/31/2015).
On the linked page you can see that two people encountered the same problem, and the developer of node-adodb asked them to upload an example (on 7/17/2015).

How to get Vertica copy from stdin response in NodeJS?

I'm using Vertica Database 07.01.0100 and node.js v0.10.32, with the vertica nodejs module by vanberger. I want to send a COPY ... FROM STDIN command, and that works using this example: https://gist.github.com/soldair/5168249. Here's my code:
var loadStreamQuery = "COPY \"" + input('table-name') + "\" FROM STDIN DELIMITER ',' skip 1 direct;";
var stream = through();
connection.copy(loadStreamQuery, function (transfer, success, fail) {
    stream.on('data', function (data) {
        log.info("loaddata: on data =>", data);
        transfer(data);
    });
    stream.on('end', function (data) {
        log.info("loaddata: on end =>", data);
        if (data) {
            transfer(data);
        }
        success();
        callback(null, {'result': {'status': '200', 'result': "Data was loaded successfully into Vertica"}});
    });
    stream.on('error', function (err) {
        fail();
        log.error("loaddata: on error =>", err);
        connection.disconnect();
    });
    stream.write(new Buffer(file));
    stream.end();
});
But if the data file has more columns than the target table, it doesn't say so. It just runs happily, copying nothing, and then ends. When I look at the table, nothing has been loaded. If I do the same thing in DbVisualizer, it tells me that 0 rows were affected.
I would like to examine the status of the command, but I don't know how. Is there some other event I need to listen for? Do I need to save the result of copy to a variable and listen on that, like I do with query calls? I'm a nodejs noob, so if the answer is obvious, just let me know.
Thanks!
I don't really think this is a node.js thing so much as a Vertica thing.
You need to look for rejected rows. You can find some good examples in the docs.
If you want to actually see the rows that were rejected, you can use a COPY clause like REJECTED DATA AS TABLE "loader_rejects". Alternatively, you can send the rejects to a file on the cluster; I'm not aware of a way to get rejected rows into a local file when loading via STDIN.
If you don't care about the actual data and just want to know how many rows loaded and how many were rejected, you can use GET_NUM_REJECTED_ROWS() and GET_NUM_ACCEPTED_ROWS(). COPY will also return a result set with the count of loaded rows, at least that is what I've noticed in the past.
So I guess as an example, if you want to see how many rows were accepted and rejected, you could do:
connection.query "SELECT GET_NUM_REJECTED_ROWS() AS REJECTED_ROWS, GET_NUMBER_ACCEPTED_ROWS() AS ACCEPTED_ROWS", (err, resultset) -> log.info( err, resultset.fields, resultset.rows, resultset.status )
