What are your suggestions?
Is it better to place TRTCHttpServer on the main form or on a data module with the other server components? The demo apps show both implementations. The component will be set up with the MultiThreaded property set to TRUE. As far as I know, if the server is placed separately on the main form, the data module is created with every thread when a client connects. Is that true?
Also, if I want to implement a pooling algorithm for DB connections (TZConnection), where should it be put? In the data module with the other server components and DB-aware components, or in a separate data module? The pooling algorithm would be threaded, like this:
DB Connection pool
The server should have a DB connection pool and be multithreaded; this could be achieved with the RTC components. It would serve as the 2nd tier of a 3-tier architecture. The 3rd tier is MySQL, connected via ZeosLib.
Thanks for your answers.
PS: I have searched for other suggestions but could not find a clear answer. Please help.
I guess you have several questions in one...
AFAIK RTC uses a thread pool, for better scalability and lower resource use, so you cannot assume you have one thread per client.
It is always preferable to place your logic in a data module, and NEVER in the main form: do not mix UI and server code. For instance, it could make sense to host your server in a service in production.
If you are using ZeosLib, the connection pool you are talking about has nothing to do with the ZDBC connection pool.
In order to improve application performance, I thought of creating a Redis connection pool to share the load, instead of routing all incoming requests through the same single Redis connection, as per the suggestion by the Redis team here.
What would be the best way of creating a StackExchange.Redis connection pool for the same Redis server config using C#, rotating one connection after another from the pool to serve incoming requests?
Is there any SDK/NuGet package available for creating a Redis connection pool?
At present we reuse a single ConnectionMultiplexer, created using the Lazy pattern via a singleton class; it initiates a single Redis connection object on the very first request and is reused throughout the application lifetime.
P.S.: Thread safety can be ignored, as all the instances in the connection pool use the same Redis server config.
There's a library I implemented two years ago for exactly that requirement. It's thread safe, and it creates the connections in the pool lazily.
It also has built-in implementations of connection selection strategies, such as round-robin and load-based.
The NuGet package is https://www.nuget.org/packages/StackExchange.Redis.MultiplexerPool/
You can see a sample here: https://github.com/mataness/StackExchange.Redis.MultiplexerPool/blob/master/samples/RedisConnectionPoolConsoleApp/Program.cs
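If you end up rolling your own instead, the rotation itself is tiny. Here is a language-agnostic sketch in JavaScript (createConnection is a hypothetical stand-in for ConnectionMultiplexer.Connect, and the pool size is arbitrary):

// Hypothetical factory standing in for ConnectionMultiplexer.Connect(config).
function createConnection(config) { return { config }; }

class RoundRobinPool {
  constructor(config, size) {
    // Every connection uses the same server config, as in the question.
    this.connections = Array.from({ length: size }, () => createConnection(config));
    this.next = 0;
  }
  get() {
    // Hand out connections one after another, wrapping around.
    const conn = this.connections[this.next];
    this.next = (this.next + 1) % this.connections.length;
    return conn;
  }
}

const pool = new RoundRobinPool({ host: 'localhost', port: 6379 }, 5);
const conn = pool.get();   // each call rotates to the next connection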
I want to create two database connections and periodically check the status of each. If database 1 fails, I want to switch the connection to database 2. Can you give me some pointers, please?
The first question is what exactly you are trying to do during the 'failover'. Is the Node.js app doing some work for users, or is it just a monitoring script? Next, how are your DBs configured (I'm guessing it's not RAC)?
There are all sorts of 'high availability' options and levels in Oracle, many of which are transparent to the application and are available when you use a connection pool. A few are described in the node-oracledb doc - look at Connections and High Availability. Other things can be configured in a tnsnames.ora file such as connection retries if connection requests fail.
At the most basic level, the answer to your question is that you could periodically check whether a query works, or just use connection.ping(). If you go this 'roll your own' route, use a connection pool of size 1 for each DB, then periodically get the connection from the pool and use it (sketched below).
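A minimal sketch of that approach with node-oracledb (the connection details and the 5-second interval are placeholders, and error handling is trimmed):

const oracledb = require('oracledb');

let active;   // the pool that last passed its health check

async function init() {
  // Two pools of exactly one connection each, one per database.
  await oracledb.createPool({ poolAlias: 'db1', user: 'app', password: 'secret',
    connectString: 'host1/SVC1', poolMin: 1, poolMax: 1 });
  await oracledb.createPool({ poolAlias: 'db2', user: 'app', password: 'secret',
    connectString: 'host2/SVC2', poolMin: 1, poolMax: 1 });
  active = oracledb.getPool('db1');

  setInterval(async () => {            // the periodic check
    try {
      const conn = await active.getConnection();
      await conn.ping();               // cheap round trip to the database
      await conn.close();              // return it to the pool
    } catch (err) {
      // Health check failed: switch over to the other pool.
      active = active === oracledb.getPool('db1')
        ? oracledb.getPool('db2') : oracledb.getPool('db1');
    }
  }, 5000);
}

The rest of the app would always do active.getConnection() rather than holding a reference to either pool directly.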
If you update your question with details, it would be easier to answer.
We are trying to implement the strategy outlined in the following presentation (slides 13-18) using the Node.js mongo-native driver.
https://www.slideshare.net/mongodb/securing-mongodb-to-serve-an-awsbased-multitenant-securityfanatic-saas-application
In summary:
Create a connection pool to MongoDB from Node.js.
For every request for a tenant, get a connection from the pool and "authenticate" it. Use the authenticated connection to serve the request. After the response, return the connection to the pool.
I'm able to create a connection pool to MongoDB without specifying any database, using the mongo-native driver like so:
const client = new MongoClient('mongodb://localhost:27017', { useNewUrlParser: true, poolSize: 10 });
However, in order to get a db object, I need to do the following:
const db = client.db(dbName);
This is where I would like to authenticate the connection, but AFAICS this functionality has been deprecated/removed from the more recent MongoDB drivers, both Node.js and Java.
Going by the presentation, it looks like this was possible with older versions of the Java driver.
Is it even possible for me to use a single connection pool and authenticate tenants to individual databases using the same connections?
The alternative we have is to have a connection pool per tenant, which is not attractive to us at this time.
Any help will be appreciated, including reasons why this feature was deprecated/removed.
It's me from the slides!! :) I remember that session, it was fun.
Yeah, that doesn't work any more; they killed this magnificent feature about 6 months after we implemented it, and we were out with it in beta at the time. We had to change the way we work.
It's a shame, since to this day, in Mongo, "connection" (the network setup, SSL, cluster identification) and authentication are 2 separate actions.
Think about when you run the mongo shell: you provide the host, port, and replica set if any, and you're in, connected! But not authenticated. You can then authenticate as user1, do stuff, then authenticate as user2 and do stuff only user2 can do. And this is done on the same connection, without going through the overhead of creating the channel again, the SSL handshake, and so on...
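For example, in the legacy mongo shell (the user names and passwords are placeholders):

$ mongo --host localhost --port 27017
> use admin
> db.auth("user1", "pass1")   // returns 1 on success; now doing user1's work
> db.auth("user2", "pass2")   // same network connection, now acting as user2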
Back then, the driver let us have a connection pool of "blank" connections that we could authenticate at will as the tenant in the context of the current execution thread.
Then they deprecated this capability; I think it was with Mongo 2.4. Now they only support connections that are authenticated at creation. We asked enterprise support; they didn't say why, but to me it looked like they found this approach insecure: the "old" authentication may leak and linger on that not-so-blank reusable connection.
We changed our multi-tenancy infrastructure from one large pool of blank connections to many small pools of authenticated connections, one pool per tenant. These per-tenant pools can be extremely small, like 3 or 5 connections each. This solution scaled nicely to several hundred tenants, but to reach thousands of tenants we had to make all kinds of optimizations: creating pools on demand, closing them after idle time, lazy creation for non-active or dormant tenants, and so on (sketched below). This allowed us to scale even more... We're still looking into solutions and optimizations.
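A rough sketch of that pool-per-tenant bookkeeping with the Node.js driver (uriForTenant, the pool size of 3, and the idle timeout are illustrative assumptions, not our production code):

const { MongoClient } = require('mongodb');

const pools = new Map();              // tenantId -> { client, lastUsed }
const IDLE_MS = 10 * 60 * 1000;       // close pools idle for 10 minutes (arbitrary)

// uriForTenant is a hypothetical helper that builds a URI carrying
// the tenant's own credentials, e.g. mongodb://tenantUser:pw@host/tenantDb
async function getTenantDb(tenantId, uriForTenant) {
  let entry = pools.get(tenantId);
  if (!entry) {                       // lazy creation: only when the tenant shows up
    const client = await MongoClient.connect(uriForTenant(tenantId), { poolSize: 3 });
    entry = { client, lastUsed: 0 };
    pools.set(tenantId, entry);
  }
  entry.lastUsed = Date.now();
  return entry.client.db(tenantId);   // database named after the tenant
}

// Evict pools of dormant tenants so they do not hold connections open.
setInterval(() => {
  for (const [tenantId, entry] of pools) {
    if (Date.now() - entry.lastUsed > IDLE_MS) {
      pools.delete(tenantId);
      entry.client.close();
    }
  }
}, 60 * 1000);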
You could always go back to a global pool of connections authenticated as a single Mongo user that has access to multiple databases. Yes, you can switch databases on that same authenticated connection; you just can't switch authentication.
This is an example with the pure MongoDB Java driver; we used Spring, which provides similar functionality:
MongoClient mongoClient = new MongoClient();   // one client, one shared pool
DB cust1db = mongoClient.getDB("cust1");       // tenant 1's database
cust1db.getCollection("orders").find();        // "orders" is a hypothetical collection name
DB cust2db = mongoClient.getDB("cust2");       // tenant 2's database, same connections
cust2db.getCollection("orders").find();
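The equivalent with the Node.js driver the question uses would be along these lines (a sketch; the user, password, and collection name are placeholders):

const { MongoClient } = require('mongodb');

async function main() {
  // One client, one underlying pool, authenticated once as a user
  // that has access to both databases.
  const client = await MongoClient.connect(
    'mongodb://appUser:secret@localhost:27017', { poolSize: 10 });

  const cust1db = client.db('cust1');   // switching databases...
  const cust2db = client.db('cust2');   // ...does not re-authenticate
  await cust1db.collection('orders').find().toArray();
  await cust2db.collection('orders').find().toArray();

  await client.close();
}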
Somewhat related: I would recommend looking at MongoDB encryption at rest. It's an enterprise feature, and the only way to encrypt each database (each customer) with a different key.
I have a Node.js script and a PostgreSQL database, and I'll be using a library that maintains a pool of connections to the database.
Say I have a script that queries the database multiple times (not in a transaction) at different points. How do I tell whether I should acquire a single connection/client and reuse it throughout*, or acquire a new client from the pool for each query? (Both work, but which has better performance?)
*task in the pg-promise library, connect in the node-postgres library.
...
// Acquire connection from pool.
(Database query)
(Non-database-related code)
(Database query)
// Release connection to pool.
...
or
...
// Acquire connection from pool.
(Database query)
// Release connection to pool.
(Non-database-related code)
// Acquire connection from pool.
(Database query)
// Release connection to pool.
...
I am not sure how the pool you are using works, but normally pools reuse connections (they don't disconnect after use), so you do not need to be concerned with caching connections yourself.
You can use the node-postgres module, which will make your task easier.
And about your question of when to use a pool, here is a brief answer:
The PostgreSQL server can only handle 1 query at a time per connection. That means if you have 1 global new pg.Client() connected to your backend, your entire app is bottlenecked by how fast Postgres can respond to queries. It literally lines everything up, queuing each query. Yeah, it's async, so that's alright... but wouldn't you rather multiply your throughput by 10x? Use pg.connect and set pg.defaults.poolSize to something sane (we do 25-100, not sure of the right number yet).

new pg.Client is for when you know what you're doing: when you need a single long-lived client for some reason, or need to very carefully control the life-cycle. A good example of this is when using LISTEN/NOTIFY. The listening client needs to stay around, connected, and not shared, so it can properly handle NOTIFY messages. Another example would be opening a one-off client to kill some hung stuff, or in command-line scripts.
Here is the link to that module:
https://github.com/brianc/node-postgres
You can see the documentation there, including the section about pooling. Hopefully this helps. Thanks :)
And about releasing connections: pg.connect provides a done callback, which you call when you want to return the connection to the pool.
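For completeness, here is what the two patterns from your question look like with the current node-postgres Pool API (the connection string is a placeholder; pg.connect and pg.defaults.poolSize quoted above are the older API):

const { Pool } = require('pg');
const pool = new Pool({ connectionString: 'postgres://user:pass@localhost/mydb' });

async function reuseOneClient() {
  const client = await pool.connect();   // acquire once
  try {
    await client.query('SELECT 1');      // query 1
    // ... non-database-related code ...
    await client.query('SELECT 2');      // query 2 on the same connection
  } finally {
    client.release();                    // always return it to the pool
  }
}

async function acquirePerQuery() {
  // pool.query() checks a client out, runs the query, and releases it for you.
  await pool.query('SELECT 1');
  // ... non-database-related code ...
  await pool.query('SELECT 2');
}

The trade-off: holding one client across the non-database work keeps that connection unavailable to other requests, while acquiring per query lets the pool interleave other work between your queries.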
I'm using the Node native client 1.4 in my application, and I found something in the documentation a little bit confusing:
A Connection Pool is a cache of database connections maintained by the driver so that connections can be re-used when new connections to the database are required. To reduce the number of connection pools created by your application, we recommend calling MongoClient.connect once and reusing the database variable returned by the callback:
Several questions come in mind when reading this:
Does it mean the db object also maintains the failover feature provided by a replica set? I thought that was the job of MongoClient (not sure about this, but the C# driver documentation does say MongoClient maintains the replica set connections).
If I'm reusing the db object, when should I invoke db.close()? I see db.close() in every example, but shouldn't we keep it open if we want to reuse it?
EDIT:
As this is a topic about reuse, I'd also like to know how we can share the db across different functions/objects.
As the project grows bigger, I don't want to nest all the functions/objects in one big closure, but I also don't want to pass the db to every function/object.
What's a more elegant way to share it across the application?
The concept of "connection pooling" for database connections has been around for some time. It really is a common-sense approach: establishing a connection to the database every time you wish to issue a query is very costly, and you don't want to be doing that with all the additional overhead involved.
So the general principle is that you have an object handle (the db reference in this case) that checks which "pooled" connection it can use and, if the current pool is fully utilized, creates another connection (or a few more), up to the pool limit, in order to service the request.
The MongoClient class itself is just a constructor or "factory" type class whose purpose is to establish the connections, and indeed the connection pool, and return a handle to the database for later use. So it is actually the connections created here that are managed for things such as replica-set failover, or choosing another router instance from those available, and generally handling the connections.
As such, the general practice in long-lived applications is that the handle is either globally available or retrievable from an instance manager, giving access to the available connections. This avoids the need to establish a new connection elsewhere in your code, which, as already stated, is a costly operation.
You mention the example code that is present in many driver manuals, often or always calling db.close(). But these are just examples, not long-running applications, and as such they tend to be "cycle complete": they show all of the initialization, the usage of various methods, and finally the cleanup as the application exits.
Good application or ODM-type implementations will typically have a way to set up connections, share the pool, and then gracefully clean up when the application finally exits. You might write your code just like the manual-page examples for small scripts, but for a larger, long-running application you will probably implement code to clean up your connections as your application exits.
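To answer the EDIT about sharing the db without one big closure: a minimal sketch, assuming the 1.x-era driver from your question (the URL and file name db.js are placeholders), is to let one module own the handle and require that module everywhere:

// db.js - one module owns the handle; everything else requires this module.
var MongoClient = require('mongodb').MongoClient;
var db = null;

exports.connect = function (url, callback) {
  MongoClient.connect(url, function (err, database) {
    if (err) return callback(err);
    db = database;               // the 1.x driver hands back the db object itself
    callback(null, db);
  });
};

exports.get = function () {
  if (!db) throw new Error('connect() has not completed yet');
  return db;
};

Call connect() once at startup; afterwards any function or object can require('./db') and call get(), and db.close() belongs only in your shutdown path.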