Thread-safe connection to MongoDB for read-only data - multithreading

I use pymongo (from pymongo import MongoClient) and it works very well, but now I have changed my program to an async, multi-threaded server and I get errors like:
can't pickle _thread.lock objects
or
cannot pickle local object MongoClient.__init__.<locals>.target
I only use the database for reading and don't change anything, so there is no need to lock MongoDB, but the threading and queue modules set a lock on it and I get the error.
Is there any way to create a thread-safe connection to Mongo just for reading data (find, aggregate)?
My server dynamically calls 1 to 5 classes (which are multi-threaded) and all of them connect to the DB.
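The pickle errors usually mean the MongoClient (or an object holding one) is being passed through a multiprocessing queue or created before worker processes fork; MongoClient itself is thread-safe and maintains its own connection pool, so in a purely multi-threaded server one client can simply be shared by every thread. A minimal sketch of that pattern, assuming plain threading and read-only queries (the URI, database, and collection names are placeholders):

import threading
from pymongo import MongoClient

# One client per process: MongoClient is thread-safe and pools
# connections internally, so all threads can reuse it for reads.
client = MongoClient('mongodb://localhost:27017')
db = client['mydb']

def worker(query):
    # find() and aggregate() are safe to call concurrently from threads
    print(list(db.items.find(query)))

threads = [threading.Thread(target=worker, args=({},)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

If you do use multiprocessing, create the client inside each worker process instead of pickling or sharing it.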

Related

MongoDB / pymongo 'authenticate' method call fails because no such method exists

I have a problem connecting to MongoDB using the pymongo driver (Python 3.10, pymongo 4.2.0, MongoDB 6); specifically, the authentication step fails. Please see my code below:
import pymongo
from pymongo import MongoClient
client = MongoClient('mongodb://10.10.1.8:27017')
client.admin.authenticate('dev01','my_password_here')
I am behind a company firewall, which is why you see an internal IP address (10.10.1.8).
When I try to test-run the Python code to store data, I get the following error:
File "C:\Users\ABC\Documents\python_development\pymongo_test.py", line 7, in <module>
client.admin.authenticate('dev01','my_password_here')
File "C:\Users\ABC\AppData\Local\Programs\Python\Python310\lib\site-packages\pymongo\collection.py", line 3194,
in __call__ raise TypeError(
TypeError: 'Collection' object is not callable. If you meant to call the 'authenticate' method
on a 'Database' object it is failing because no such method exists.
MongoDB runs on a virtual Ubuntu Linux server that sits on top of a Debian Linux server. My laptop runs Windows 10.
The code looks correct; any ideas why this is happening?
Migration guide from pymongo 3.x to 4.x
https://pymongo.readthedocs.io/en/stable/migrate-to-pymongo4.html?highlight=authenticate#database-authenticate-and-database-logout-are-removed reads:
Removed pymongo.database.Database.authenticate() and pymongo.database.Database.logout(). Authenticating multiple users on the same client conflicts with support for logical sessions in MongoDB 3.6+. To authenticate as multiple users, create multiple instances of MongoClient. Code like this:
client = MongoClient()
client.admin.authenticate('user1', 'pass1')
client.admin.authenticate('user2', 'pass2')
can be changed to this:
client1 = MongoClient(username='user1', password='pass1')
client2 = MongoClient(username='user2', password='pass2')
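Applied to the question's snippet, the pymongo 4.x form passes the credentials straight to MongoClient; authSource='admin' below is an assumption that the dev01 user was defined in the admin database:
client = MongoClient('mongodb://10.10.1.8:27017',
                     username='dev01',
                     password='my_password_here',
                     authSource='admin')  # assumes the user lives in admin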

Timed out fetching a new connection from the connection pool prisma

I have a NestJS scheduler that runs every hour.
I'm using multiple libraries to connect to the Postgres database through the NestJS app:
prisma
Knex
I have a scheduler table that holds the URL to call and the datetime to run it at,
and a rule table that holds tablename, columnname, a logical operator (i.e. >, <, =, !=) and a conditional operator (AND, OR).
Knex creates the query that is stored in the database.
for (const t of schedules) {
    // this won't stop and will make calls to the URLs simultaneously
    fetch("url").catch();
}
Each URL inserts records, and that can take 1, 2, or 3 hours depending on the URL.
But after a certain time I get the error: Timed out fetching a new connection from the connection pool (Prisma).
Is it because I'm using multiple clients to connect to the database?
You can configure the connection_limit and pool_timeout parameters by passing them in the connection string. You can set connection_limit to 1 to make sure that Prisma doesn't initiate new database connections; this way you won't get timeout errors.
Increasing the pool timeout gives the query engine more time to process the queries in the queue.
Both parameters are documented in Prisma's connection-pool reference.
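For example, both parameters are appended as query-string arguments to the datasource URL; the credentials, host, and database name below are placeholders:
DATABASE_URL="postgresql://user:password@localhost:5432/mydb?connection_limit=1&pool_timeout=20"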

How can I change the name of a task that Celery sends to a backend?

I have built a queue system using Celery that accepts web requests and executes some tasks to act on those requests. I'm using Redis as the backend for Celery, but I imagine this question would apply to all backends.
Celery is returning the task name as celery-task-meta-<task ID> and storing it in the backend. This is meaningless to me. How can I change the name of the result that Celery sends to Redis? I have searched through all of Celery's documentation to try to figure out how to do this.
The Redis CLI monitor shows that Celery is using the SETEX command and sending the following input:
"SETEX" "celery-task-meta-dd32ded3-00aa-4884-8b21-42f8332e7fac"
"86400" "{\"status\": \"SUCCESS\", \"result\": {\"mode\": \"staging\",
\"123\": 50}, \"traceback\": null, \"children\": [], \"task_id\":
\"dd32ded3-00aa-4884-8b21-42f8332e7fac\", \"date_done\":
\"2019-05-09T16:44:12.826951\", \"parent_id\":
\"2e99d958-cd5a-4700-a7c2-22c90f387f28\"}"
The "result": {...} that you can see in the SETEX command above is what the task returns. I would like the SETEX to be more along the lines of:
"SETEX" "mode-staging-123-50-SUCCESS" "{...}", so that when I view all my keys in Redis, the name of the key is informational to me.
You can't change this. The task key is created by the ResultConsumer class, which the Redis backend uses. ResultConsumer then delegates creation of the task key to the BaseKeyValueStoreBackend class. The get_key_for_task method, which actually creates the key, uses a hardcoded task_keyprefix set to celery-task-meta-. So, to change the behaviour, you would have to subclass these classes. There's no configuration option for it.
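For completeness, a minimal sketch of that subclassing route, assuming your Celery version accepts a dotted class path in the result_backend setting (the module path and prefix here are hypothetical). Note that this only changes the prefix: the key is built from the task id alone, so result values such as mode or status cannot appear in it.

# myproject/backends.py
from celery.backends.redis import RedisBackend

class PrefixedRedisBackend(RedisBackend):
    # BaseKeyValueStoreBackend.get_key_for_task() reads this attribute
    # when building the key, so overriding it replaces celery-task-meta-.
    task_keyprefix = 'myapp-task-meta-'

# in the Celery configuration, point the result backend at the subclass:
app.conf.result_backend = 'myproject.backends.PrefixedRedisBackend+redis://localhost:6379/0'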

NodeJS/Express: ECONNRESET when doing multiples requests using Sequelize/Epilogue

I'm building a webapp using the following architecture:
a PostgreSQL database (called DB),
a NodeJS service (called DBService) using Sequelize to manipulate the DB and Epilogue to expose a REST interface via Express,
a NodeJS service called Backend serving as a backend and using DBService through REST calls,
an AngularJS website called Frontend using Backend.
Here are the versions I'm using:
PostgreSQL 9.3
Sequelize 2.0.4
Epilogue 0.5.2
Express 4.13.3
My DB schema is quite complex, containing 36 tables, some of which contain a few hundred records. The DB is not written to very often; it is mostly read from.
But recently I created a script in Backend to run a complete check-up of the data contained in the DB: basically this script retrieves all the data of all tables and does some basic checks on it. Currently the script only reads from the database.
In order to achieve this I had to remove Epilogue's pagination limit by using the option pagination: false (see https://github.com/dchester/epilogue#pagination).
But now when I launch my script I randomly get this kind of error:
The request failed when trying to retrieve a uniquely associated object with URL: http://localhost:3000/CallTypes/178/RendererThemes.
Code: -1
Message: Error: connect ECONNRESET 127.0.0.1:3000
The error appears randomly during the script execution: it is not always this URL that is returned, and not even always the same tables or relations. The error message before the code is a custom message returned by Backend.
The URL refers to DBService, but I don't see any error there, even with logging: console.log enabled in Sequelize and DEBUG=express:* set to see what happens in Express.
I tried to put some setTimeout calls in my Backend script to slow it down, without real change. I also tried to tweak different values such as the PostgreSQL max_connections limit (I set it to 1000 connections) and Sequelize's maxConcurrentQueries and pool values, but without success yet.
I did not find where I can customize Express's connection pool; maybe that would do the trick.
I assume that the error comes from DBService, from the Express configuration, or from somewhere in the configuration of the DB (either in Sequelize/Epilogue or even in the PostgreSQL server itself), but as I did not see any error in any log, I'm not sure.
Any ideas to help me solve it?
EDIT
After further investigation I may have found the answer, which is very similar to How to avoid a NodeJS ECONNRESET error?: I'm using my own RestClient object to make my HTTP requests, and this object was built as a singleton with this method:
var NodeRestClient: any = require('node-rest-client').Client;
...
static getClient() {
    // lazily create one shared client for the whole process
    if (RestClient.client == null) {
        RestClient.client = new NodeRestClient();
    }
    return RestClient.client;
}
Then I was always using the same object to make all my requests, and when the process was too fast, it created collisions... So I just removed the if (RestClient.client == null) test, and for now it seems to work.
If there is a better way to manage this, by closing requests or managing a pool, feel free to contribute :)

SQLAlchemy, PostgreSQL Connection Pooling

Hopefully this will be a quick answer for somebody. I've looked through the docs a bit, but still haven't found a definitive answer. I have a number of 'idle' connections that stick around, even after I call session.close() in SQLAlchemy. Are these idle connections the way SQLAlchemy/Postgres handle connection pooling?
This is the query I used to check DB connection activity:
SELECT * FROM pg_stat_activity ;
Here is sample code:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

application = Flask(__name__)
application.config.from_object('config')
db = SQLAlchemy(application)

class Brand(db.Model):
    id = db.Column(db.Integer, primary_key=True)

@application.route('/')
def documentation():
    all = Brand.query.all()
    db.session.remove()  # don't need this since it's called on teardown
    return str(len(all))

if __name__ == '__main__':
    application.run(host='0.0.0.0', debug=True)
Yes. Closing a session does not immediately close the underlying DBAPI connection. The connection gets put back into the pool for subsequent reuse.
From the SQLAlchemy docs:
[...] For each Engine encountered, a Connection is associated with it, which is acquired via the Engine.contextual_connect() method. [...]
Then, Engine.contextual_connect() points you to Engine.connect(), which states the following:
The Connection object is a facade that uses a DBAPI connection internally in order to communicate with the database. This connection is procured from the connection-holding Pool referenced by this Engine. When the close() method of the Connection object is called, the underlying DBAPI connection is then returned to the connection pool, where it may be used again in a subsequent call to connect().
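If the idle connections themselves are the problem (for example, they press against a low max_connections setting), pooling can be disabled so that connections really close when released. A minimal sketch using SQLAlchemy's NullPool; the database URL is a placeholder:

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# NullPool closes the underlying DBAPI connection as soon as it is
# released, instead of keeping it idle in the pool for reuse.
engine = create_engine('postgresql://user:password@localhost/mydb',
                       poolclass=NullPool)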
