Programmatically create multiple connections for TcpNetClientConnectionFactory - spring-integration

Continuing the conversation from this question:
Two-part question here:
Can a TcpNetClientConnectionFactory have multiple connections to its upstream
server, if the host and port are the same?
If so, how can I programmatically build a new connection for that connection
factory? I see the buildNewConnection method, but it is protected.
The first connection is automatically built as soon as the first Message
passes through the factory. What we need to do is notice when subsequent Messages have a different ip_connectionId, stand up a new connection, and route those
Messages to that new connection. Obviously, Messages with the original
ip_connectionId would still be routed to the original connection.
Not sure whether it would be better to create multiple connections off of one
connection factory, or create a whole new connection factory, sending message
handler, and receiving channel adapter for each new connection.

If the inbound connection factory is a TcpNetServerConnectionFactory, you can simply use a ThreadAffinityClientConnectionFactory because each inbound connection gets its own thread.
You would call getConnection(). This will bind the connection to the thread (and you can obtain the connection id from it), but you don't really need to map the header in this direction because of the thread affinity; you would only have to map it on the return path.
Bear in mind, though, if the ThreadAffinityClientConnectionFactory detects that a connection has been closed, it will create a new one. So, you might want to call getConnection() in your mapper on each call. However, there would still be a race condition, so you might also need to listen for TcpConnectionCloseEvents and TcpConnectionOpenEvents.
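As a rough illustration (not from the original thread), the wiring for that approach could look something like the Java configuration below; the host, port, bean names and the event handling are placeholders I've assumed for the example:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.EventListener;
import org.springframework.integration.ip.tcp.connection.TcpConnectionCloseEvent;
import org.springframework.integration.ip.tcp.connection.TcpConnectionOpenEvent;
import org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory;
import org.springframework.integration.ip.tcp.connection.ThreadAffinityClientConnectionFactory;

@Configuration
public class ClientTcpConfig {

    @Bean
    public TcpNetClientConnectionFactory realFactory() {
        // The factory that actually opens sockets to the upstream server (placeholder host/port).
        TcpNetClientConnectionFactory factory =
                new TcpNetClientConnectionFactory("upstream.example.com", 1234);
        factory.setSingleUse(true); // hand out a fresh connection for each getConnection() call
        return factory;
    }

    @Bean
    public ThreadAffinityClientConnectionFactory affinityFactory(TcpNetClientConnectionFactory realFactory) {
        // Wrapper that binds a connection to the calling thread; use this one on the
        // outbound adapter/gateway instead of the real factory.
        return new ThreadAffinityClientConnectionFactory(realFactory);
    }

    @EventListener
    public void opened(TcpConnectionOpenEvent event) {
        // React to newly established connections, e.g. record the connection id.
        System.out.println("Opened: " + event.getConnectionId());
    }

    @EventListener
    public void closed(TcpConnectionCloseEvent event) {
        System.out.println("Closed: " + event.getConnectionId());
    }
}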
If you use NIO on the inbound, or otherwise hand off the work to other threads via an executor, this won't work.
In that case you would need your own wrapping connection factory - you could use the ThreadAffinityClientConnectionFactory as a model, but instead of storing the connections in a ThreadLocal, you'd store them in a map. But you'd still need a ThreadLocal (set upstream on each call) to tell the factory which connection to hand out when the adapter asks for one.
There's a trick you need to be aware of, however.
There is a property singleUse on the connection factory. This serves two purposes:
first, it tells the factory to create a new connection each time getConnection() is called, instead of a single, shared connection;
second, it tells the inbound adapter to close the connection after the reply is received.
So the trick is you need singleUse=true on the real factory (so it gives you a new connection each time getConnection() is called), but singleUse=false on the wrapping factory so the adapters don't close the connection.
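Purely to illustrate the bookkeeping such a wrapping factory needs, here is a sketch of the map-plus-ThreadLocal idea; the class and method names are hypothetical, and this is not a complete connection factory - a real one would extend AbstractClientConnectionFactory and be modeled on ThreadAffinityClientConnectionFactory, as suggested above:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.integration.ip.tcp.connection.AbstractClientConnectionFactory;
import org.springframework.integration.ip.tcp.connection.TcpConnection;

// Hypothetical helper; a real solution would fold this logic into a custom
// AbstractClientConnectionFactory subclass so the outbound adapter can use it directly.
public class OutboundConnectionRegistry {

    // Set upstream on each call, e.g. from the inbound message's IpHeaders.CONNECTION_ID header.
    private final ThreadLocal<String> requestedConnectionId = new ThreadLocal<>();

    private final Map<String, TcpConnection> connections = new ConcurrentHashMap<>();

    private final AbstractClientConnectionFactory realFactory; // must be configured with singleUse=true

    public OutboundConnectionRegistry(AbstractClientConnectionFactory realFactory) {
        this.realFactory = realFactory;
    }

    public void setRequestedConnectionId(String inboundConnectionId) {
        this.requestedConnectionId.set(inboundConnectionId);
    }

    // The wrapping factory's getConnection() would delegate to something like this.
    public TcpConnection connectionForCurrentFlow() throws Exception {
        String inboundId = this.requestedConnectionId.get();
        TcpConnection existing = inboundId == null ? null : this.connections.get(inboundId);
        if (existing != null && existing.isOpen()) {
            return existing;
        }
        // Because the real factory is singleUse=true, this is a brand new connection.
        TcpConnection fresh = this.realFactory.getConnection();
        if (inboundId != null) {
            this.connections.put(inboundId, fresh);
        }
        return fresh;
    }

    // Call this from a TcpConnectionCloseEvent listener so dead connections are not handed out again.
    public void remove(String inboundConnectionId) {
        this.connections.remove(inboundConnectionId);
    }
}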
I suggest you look at the ThreadAffinityClientConnectionFactory and CachingClientConnectionFactory connection factories to see how they work.
We should probably consider splitting this into two booleans; we could probably also make some improvements to avoid the need for a thread local by adding something like getConnection(String connectionId) to the client factory contract and have the factory look up the connection internally; but that will require work in the adapters.
I'll capture an issue for this and see if we can get something in 5.2.
Rather a long answer, but I hope it makes sense.

Related

Use connection pool with MongoEngine

I have documents in different MongoDB databases referencing each other (mongoengine's LazyReferenceField), so each time I need to get the field's value, I need to connect and disconnect from the field's relevant database, which I find very inefficient.
I've read about connection pooling, but I can't find a solution on how to implement it using MongoEngine. How can I create a connection pool and reuse connections from it every time I need the value of a LazyReferenceField?
MongoEngine manages the connection globally (i.e. once connected, it automagically reuses that connection). Usually you call connect just once, when the application/script starts, and then you are good to go and don't need to interfere with the connection.
LazyReferenceField is no different from any other field (ReferenceField, StringField, etc.) in that context. The only difference is that it does not do the de-referencing immediately, but only when you explicitly request it with the .fetch method.

PostgreSQL: use same connection or get another from pool?

I have a Node.js script and a PostgreSQL database, and I'll be using a library that maintains a pool of connections to the database.
Say I have a script that queries the database multiple times (not in a transaction) at different parts of the script. How do I tell whether I should acquire a single connection/client and reuse it throughout*, or acquire a new client from the pool for each query? (Both work, but which has better performance?)
*task in the pg-promise library, connect in the node-postgres library.
...
// Acquire connection from pool.
(Database query)
(Non-database-related code)
(Database query)
// Release connection to pool.
...
or
...
// Acquire connection from pool.
(Database query)
// Release connection to pool.
(Non-database-related code)
// Acquire connection from pool.
(Database query)
// Release connection to pool.
...
I am not sure how the pool you are using works, but normally pools reuse connections (they don't disconnect after use), so you do not need to be concerned with caching connections.
You can use the node-postgres module, which will make your task easier.
As for your question about when to use a pool, here is the brief answer.
PostgreSQL server can only handle 1 query at a time per connection.
That means if you have 1 global new pg.Client() connected to your
backend, your entire app is bottlenecked based on how fast postgres
can respond to queries. It literally will line everything up, queuing
each query. Yeah, it's async and so that's alright...but wouldn't you
rather multiply your throughput by 10x? Use pg.connect and set the
pg.defaults.poolSize to something sane (we do 25-100, not sure the
right number yet).
new pg.Client is for when you know what you're doing. When you need a
single long lived client for some reason or need to very carefully
control the life-cycle. A good example of this is when using
LISTEN/NOTIFY. The listening client needs to be around and connected
and not shared so it can properly handle NOTIFY messages. Another
example would be when opening up a one-off client to kill some hung
stuff or in command line scripts.
Here is the link to that module:
https://github.com/brianc/node-postgres
You can see the documentation there, including the section about pooling. Hopefully this will help. Thanks :)
And about releasing connections: the pool provides the done callback, which you call when you are finished with a client so that it is returned to the pool.

How to prevent AMQP (RabbitMQ) message from being black holed when the connection to the broker dies?

For example, if there's a network outage and your producer loses its connection to your RabbitMQ broker, how can you prevent messages that need to be queued up from being black holed? I have a few ideas, one of them being to write all your messages to a local db, remove them once they're acked, and periodically resend after some time period, but that only works if your connection factory is set to use publisher confirms.
I'm just generating messages from my test application to simulate event logging. I'm essentially trying to create a durable producer. Also, is there a way to detect when you can reconnect to RabbitMQ? I see there's a ConnectionListener interface, but it seems you cannot send messages to flush an internal queue from within the ConnectionListener.
If you have a SimpleMessageListenerContainer (perhaps listening to a dummy queue) it will keep trying to reconnect (and fire the connection listener when successful). Or you can have a simple looper that calls createConnection() on the connection factory from time to time (it won't create a new connection each time, just return the single shared connection - if open); this will also fire the listener when a new connection is made.
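A bare-bones sketch of that "simple looper" idea with Spring AMQP; the host, sleep interval, and log messages are placeholders chosen for illustration:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.Connection;
import org.springframework.amqp.rabbit.connection.ConnectionListener;

public class ReconnectProbe {

    public static void main(String[] args) throws Exception {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");

        // Fires whenever a new underlying connection is established or closed.
        connectionFactory.addConnectionListener(new ConnectionListener() {

            @Override
            public void onCreate(Connection connection) {
                System.out.println("Connected - flush any locally buffered messages here");
            }

            @Override
            public void onClose(Connection connection) {
                System.out.println("Connection closed");
            }

        });

        // Simple looper: createConnection() returns the shared connection if it is open,
        // or attempts to open a new one (firing the listener) if it is not.
        while (true) {
            try {
                connectionFactory.createConnection();
            }
            catch (Exception e) {
                System.out.println("Broker still unreachable: " + e.getMessage());
            }
            Thread.sleep(5000);
        }
    }

}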
You can use transactions instead of publisher confirms - but they're much slower due to the handshake. It depends on what your performance requirements are.
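For completeness, switching the template to transactions is a one-liner in Spring AMQP; the queue name and host below are placeholders:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class TransactedPublisher {

    public static void main(String[] args) {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        RabbitTemplate template = new RabbitTemplate(connectionFactory);
        // Run each send in an AMQP transaction; if the connection fails before the
        // commit, the send throws instead of the message being silently lost.
        template.setChannelTransacted(true);
        template.convertAndSend("someQueue", "payload");
    }
}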

ReactiveMongo Connection, keep connection object alive in the Play context or re-establish for each call to the database? (Play, Scala, ReactiveMongo)

I am just starting to use ReactiveMongo with Play 2 (scala).
Should I store a singleton object with the connection details and the database it returns (connection.get.db("mydb")), or keep the connection alive indefinitely?
I am used to JDBC connection pools, so I am unsure of the performant way to use ReactiveMongo and Mongo.
Sorry if this is not a very well-formed question; I am fumbling in the dark a bit.
Thanks
From this documentation:
http://reactivemongo.org/releases/0.10/api/index.html#reactivemongo.api.MongoDriver
there is an optional parameter:
nbChannelsPerNode Number of channels to open per node. Defaults to 10.
It looks like the returned object (MongoConnection) is a connection pool itself, so you should use it as a singleton and not create a new instance for each request.

JDBC: Can I share a connection in a multithreading app, and enjoy nice transactions?

It seems like the classical way to handle transactions with JDBC is to set auto-commit to false. This creates a new transaction, and each call to commit marks the beginning of the next transaction.
On multithreading app, I understand that it is common practice to open a new connection for each thread.
I am writing an RMI-based multi-client server application, so basically my server seamlessly spawns one thread for each new connection.
To handle transactions correctly, should I create a new connection for each of those threads?
Isn't the cost of such an architecture prohibitive?
Yes, in general you need to create a new connection for each thread. You don't have control over how the operating system timeslices execution of threads (notwithstanding defining your own critical sections), so you could inadvertently have multiple threads trying to send data down that one pipe.
Note the same applies to any network communications. If you had two threads trying to share one socket with an HTTP connection, for instance.
Thread 1 makes a request
Thread 2 makes a request
Thread 1 reads bytes from the socket, unwittingly reading the response from thread 2's request
If you wrapped all your transactions in critical sections, and therefore lock out any other threads for an entire begin/commit cycle, then you might be able to share a database connection between threads. But I wouldn't do that even then, unless you really have innate knowledge of the JDBC protocol.
If most of your threads have infrequent need for database connections (or no need at all), you might be able to designate one thread to do your database work, and have other threads queue their requests to that one thread. That would reduce the overhead of so many connections. But you'll have to figure out how to manage connections per thread in your environment (or ask another specific question about that on StackOverflow).
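A minimal sketch of that single-writer idea; the table, column, and JDBC URL are invented for the example:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One dedicated thread owns the single JDBC connection; other threads submit
// work items to it through a queue and never touch the connection directly.
public class DatabaseWorker implements Runnable {

    private final BlockingQueue<String> pendingNames = new LinkedBlockingQueue<>();
    private final String jdbcUrl;

    public DatabaseWorker(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }

    // Called from any thread.
    public void submit(String name) {
        this.pendingNames.add(name);
    }

    @Override
    public void run() {
        try (Connection connection = DriverManager.getConnection(this.jdbcUrl);
                PreparedStatement insert = connection.prepareStatement("INSERT INTO names(name) VALUES (?)")) {
            while (!Thread.currentThread().isInterrupted()) {
                String name = this.pendingNames.take(); // blocks until work arrives
                insert.setString(1, name);
                insert.executeUpdate();
            }
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

Because only the worker thread ever touches the connection, no synchronization on the connection itself is needed.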
update: To answer your question in the comment, most database brands don't support multiple concurrent transactions on a single connection (InterBase/Firebird is the only exception I know of).
It'd be nice to have a separate transaction object, and to be able to start and commit multiple transactions per connection. But vendors simply don't support it.
Likewise, standard vendor-independent APIs like JDBC and ODBC make the same assumption, that transaction state is merely a property of the connection object.
It's uncommon practice to open a new connection for each thread.
Usually you use a connection pool like the c3p0 library.
If you are in an application server, or using Hibernate for example, look at the documentation and you will find how to configure the connection pool.
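For example, a pool shared by all threads with c3p0 could be set up roughly like this; the driver, URL, and credentials are placeholders:

import java.beans.PropertyVetoException;
import java.sql.Connection;
import java.sql.SQLException;

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolExample {

    public static void main(String[] args) throws PropertyVetoException, SQLException {
        // One pool for the whole application, shared by all threads.
        ComboPooledDataSource pool = new ComboPooledDataSource();
        pool.setDriverClass("org.postgresql.Driver");        // placeholder driver
        pool.setJdbcUrl("jdbc:postgresql://localhost/app");  // placeholder URL
        pool.setUser("app");
        pool.setPassword("secret");
        pool.setMaxPoolSize(20);

        // Each thread borrows a connection only for the duration of its unit of work.
        try (Connection connection = pool.getConnection()) {
            connection.setAutoCommit(false);
            // ... statements for this transaction ...
            connection.commit();
        } // close() returns the connection to the pool rather than closing the socket
    }
}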
The same connection object can be used to create multiple statement objects, and these statement objects can then be used by different threads concurrently. Most modern DBs interfaced by JDBC can do that; JDBC is thus able to make use of concurrent cursors, as follows. PostgreSQL is no exception here, see for example:
http://doc.postgresintl.com/jdbc/ch10.html
This allows connection pooling where the connection is only used for a short time, namely to create the statement object, and is returned to the pool right after that. This short-time pooling is only recommended when the JDBC driver also parallelizes statement operations; otherwise normal connection pooling might show better results. Either way, the thread can continue to work with the statement object and close it later, but not the connection.
1. Thread 1 opens a statement
2. Thread 2 opens a statement
3. Thread 1 does something        Thread 2 does something
4. ...                            ...
5. Thread 1 closes its statement  ...
6. Thread 2 closes its statement
The above only works in auto-commit mode. If transactions are needed, there is still no need to tie a transaction to a thread; you can simply partition the pooling along transactions and use the same approach as above. This is needed not because of some socket connection limitation, but because JDBC equates the session ID with the transaction ID.
If I remember correctly, there are APIs and products around with a less simplistic design, where the session ID and the transaction ID are not equated. With such APIs you could write your server with one single database connection object, even when it uses transactions. I will need to check and report back later on what these APIs and products are.
