I am new to connection management in JBoss and Hibernate. I have an application using Spring + Hibernate running on JBoss 7. I did some reading but have a few doubts now:
How are connections and threads related when accessing an application?
Suppose I have a maximum pool size of 10. Does that mean only 10 threads can access my application to perform database operations at a time?
If not, what happens when more than 10 threads, say 15 or 20, access it?
Do the other threads wait for the running threads to complete and then run next?
(or)
Does this result in a connection error like "no managed connections available"?
Related
I implemented a connection pool (poolMin=poolMax=10) with node-oracledb and saw a difference of up to 100 times, especially with few users, like 10. Really impressive. I also increased UV_THREADPOOL_SIZE to 4 + poolMax. At this point there are some things I could not understand.
process.env.UV_THREADPOOL_SIZE = 4 + config.pool.poolMax // Default + Max
Node.js works as a single thread (with an additional 4 worker threads, none of which are used for network I/O). So when I use a pool with 10 connections, can the single thread use all of these connections? Or is it no longer single-threaded with these settings, because I added 10 more threads to UV_THREADPOOL_SIZE? I would be grateful to anyone who could explain this.
Btw, I wonder if using a fixed-size pool like 10 would cause a problem when there are too many users. For example, if the number of concurrent users is normally 500, we can reach 5000 concurrent users on certain days of the year. Do I need a special setting (e.g. pool size 100) for those days, or will the default be enough?
Thanks in advance.
When you do something like connection.execute(), that work will be handled by a Node.js worker thread until the call completes. And each underlying Oracle connection can only ever do one 'thing' (like execute, or fetch LOB data) at a time - this is a fundamental (i.e. insurmountable) behavior of Oracle connections.
For node-oracledb you want the number of worker threads to be at least as big as the number of connections in the connection pool, plus some extra for non-database work. This allows connections to do their thing without blocking any other connection.
Any use of Promise.all() (and similar constructs) using a single connection should be assessed and considered for rewriting as a simple loop. Prior to node-oracledb 5.2, each of the 'parallel' operations of Promise.all() on a single connection will use a thread, but each will be blocked waiting for prior work on the connection to complete, so you might need even more threads available. From 5.2 onwards, any 'parallel' operations on a single connection are queued in the JavaScript layer of node-oracledb and executed sequentially, so you will only need one worker thread per connection at most. In either version, using Promise.all() where each unit of work has its own connection is different, and is only subject to the one-connection-per-thread requirements.
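As a rough sketch of the two patterns (assuming a pool has already been created with oracledb.createPool(), and using a hypothetical table t):

const oracledb = require('oracledb');

// Single connection: run statements sequentially in a simple loop.
async function runSequentially(ids) {
  const connection = await oracledb.getPool().getConnection();
  try {
    const results = [];
    for (const id of ids) {
      // one operation at a time on this connection
      results.push(await connection.execute('SELECT * FROM t WHERE id = :id', [id]));
    }
    return results;
  } finally {
    await connection.close();   // return the connection to the pool
  }
}

// Parallel work is fine when each unit of work has its own connection;
// concurrency is then bounded by poolMax and the worker thread count.
async function runInParallel(ids) {
  return Promise.all(ids.map(async (id) => {
    const connection = await oracledb.getPool().getConnection();
    try {
      return await connection.execute('SELECT * FROM t WHERE id = :id', [id]);
    } finally {
      await connection.close();
    }
  }));
}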
Check the node-oracledb documentation sections "Connections, Threads, and Parallelism" and "Connection Pool Sizing".
Separate from how connections are used, first you have to get a connection. Node-oracledb will queue connection pool requests (e.g. pool.getConnection()) if every connection in the pool is already in use. This provides some resiliency under connection spikes. There are some limits to help with real storms: queueMax and queueTimeout. Yes, at peak periods you might need to increase the poolMax value. You can check the pool statistics to see the pool behavior. You don't want to make the pool too big - see the doc.
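Roughly, those settings live on the pool itself; a minimal sketch (option names as in node-oracledb 5.2+; the credentials and connect string are placeholders):

const oracledb = require('oracledb');

async function initPool() {
  return oracledb.createPool({
    user: 'app_user',                        // placeholder credentials
    password: process.env.DB_PASSWORD,
    connectString: 'dbhost.example.com/orclpdb1',
    poolMin: 10,
    poolMax: 10,            // consider raising this for peak periods
    queueMax: 500,          // getConnection() requests beyond this backlog are rejected
    queueTimeout: 60000,    // ms a queued getConnection() request waits before erroring
    enableStatistics: true  // lets you call pool.getStatistics() / pool.logStatistics()
  });
}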
Side note: setting process.env.UV_THREADPOOL_SIZE inside your script doesn't have an effect in Node.js on Windows; there, the UV_THREADPOOL_SIZE environment variable must be set before Node.js is started.
I am using the node-oracledb module to make connections and perform operations with an Oracle database.
There are two approaches to making a connection with Oracle:
connection-pool
standalone connections (connect with Oracle whenever there is a need)
I am using the second approach, where I create a standalone connection with Oracle on demand.
The problem I am facing: after 4 successful concurrent connections with Oracle, Oracle is not allowing the 5th connection until all the created connections become free.
Is there any way to increase this thread count?
Here is how you can increase the thread pool size:
Start your Node.js app with:
UV_THREADPOOL_SIZE=64 node myapp.js
OR
add the line below in myapp.js (the starter file of your Node.js app):
process.env.UV_THREADPOOL_SIZE=64
Note: 64 is the size of the thread pool.
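If you use the in-script approach, a minimal sketch of where the line has to go (it must run before anything uses the libuv thread pool, and, as noted earlier on this page, it has no effect on Windows):

// Must be at the very top of the entry file, before requiring modules
// that use the libuv thread pool (no effect on Windows).
process.env.UV_THREADPOOL_SIZE = 64;
const oracledb = require('oracledb');   // its worker threads now come from the larger pool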
More information about Thread Pool
Node worker threads executing database statements on a connection will commonly wait until round-trips between node-oracledb and the database are complete.
Using worker_threads from Node 12, is it suitable to establish remote connections within the workers and keep those connections alive?
I don't mean sharing a socket between the master and the workers like we could do with the Node cluster module and fork.
The idea would be to have pools of secure connections already established within the workers to use if needed.
Let's say I have a pool of 10 workers. When a worker is created, some pre-established TLS connections (streams) are created to servers X, Y and Z, and the worker is marked as "ready".
Each time I use a worker to process "heavy" tasks (mapReduce, etc.), if I need to post data to or get data from server X, Y or Z during the process, I use the appropriate TLS connection already established from the pool.
Once the task is completed, the result is returned to the master and the worker just executes the next task.
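A very rough sketch of the worker side of what I mean (host names, ports and message shapes are just placeholders):

// worker.js -- open TLS connections when the worker starts, then signal "ready".
const { parentPort } = require('worker_threads');
const tls = require('tls');

const targets = [
  { host: 'server-x.example.com', port: 443 },   // placeholder servers X, Y, Z
  { host: 'server-y.example.com', port: 443 },
  { host: 'server-z.example.com', port: 443 },
];

const sockets = new Map();

function connect({ host, port }) {
  return new Promise((resolve, reject) => {
    const socket = tls.connect(port, host, { servername: host }, () => {
      socket.setKeepAlive(true);      // keep the connection alive between tasks
      sockets.set(host, socket);
      resolve();
    });
    socket.on('error', reject);
  });
}

Promise.all(targets.map(connect)).then(() => {
  parentPort.postMessage({ type: 'ready' });     // mark this worker as "ready"
});

parentPort.on('message', async (task) => {
  // ...do the heavy work here, using sockets.get(host) whenever remote
  // data needs to be pushed or fetched during the task...
  parentPort.postMessage({ type: 'result', taskId: task.id });
});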
1) Do you see any side effects / impact of doing so?
2) Would it be better to have the pool of TLS connections on the main thread (master), and, if remote data is needed within the workers during the tasks, use the postMessage method to communicate with the master (and vice versa)?
Thanks
Worker threads do not work for remote connections. However, you can build your own system that works similarly using TLS sockets. For such a system I would definitely recommend keeping these types of connections alive. There is significant latency in setting up these connections, and keeping them active in memory uses only a minimal amount of resources.
Keep in mind that a system like this has some drawbacks:
You are working with different machines, and each of these machines can have its own set of failure conditions.
You are communicating over a network, connections with remote servers might suddenly drop, for any reason imaginable.
You are increasing the physical distance, which will cause latency.
So keep this in the back of your mind.
Would I recommend building a system like this? It is really hard to say, and it depends on your use case, time and money. You mentioned the cluster nodes are processing 'heavy tasks', by which I reckon you mean CPU/GPU-intensive tasks. So a system like this might be a good solution; however, a simple REST API in front of your processing servers might be good enough. Or maybe even database-synchronized servers that just check the database for tasks to execute.
There are many solutions to the same problem; you just have to consider what works best for your project(s).
I'm using a Managed Executor Service to implement a process manager that processes tasks in the background upon receiving a JMS message event. Normally, there will be a small number of tasks running (maybe 10 max), but what if something happens and my application starts getting hundreds of JMS message events? How do I handle such an event?
My thought is to limit the number of threads if possible, save all the other messages to the database, and run them when a thread becomes available. Thanks in advance.
My thought is to limit the number of threads if possible, save all the other messages to the database, and run them when a thread becomes available.
The detailed answer to this question depends on which Java EE app server you choose to run on, since they all have slightly different configuration.
Any Java EE app server will allow you to configure the thread pool size of your Managed Executor Service (MES); this is the number of worker threads for your thread pool.
Say you have 10 worker threads and you get flooded with 100 requests all at once: the MES will keep a queue of backlogged requests, and the worker threads will take work off the queue whenever they finish, until the queue is empty.
Now, it's fine if work goes to the queue sometimes, but if your work queue grows more quickly than your worker threads can drain it, you will run into problems. The solution is to increase your thread pool size; otherwise the backlog will keep growing and your server will run out of memory.
what if something happens and my application starts getting hundreds of JMS message events? How do I handle such an event?
If the load on your server is so sporadic that tasks would need to be saved to a database, it seems the best approach would be to either:
increase thread pool size
have the server immediately reject incoming tasks when the task backlog queue is full (roughly sketched after this list)
have clients do a blocking wait for the server task queue to be not full (I would only advise this option if client task submission is in no way connected to user experience)
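The exact MES configuration is app-server specific, but the queue-and-workers behaviour described above is generic. Purely as an illustrative sketch (in JavaScript to match the other snippets on this page, not the Java EE API), a fixed pool draining a bounded backlog, with the "reject when full" option, looks roughly like this:

// Illustrative only: a fixed-size worker pool draining a bounded backlog queue.
// This mimics the behaviour described above; it is not the Java EE MES API.
class BoundedExecutor {
  constructor(poolSize, queueCapacity) {
    this.poolSize = poolSize;           // number of "worker threads"
    this.queueCapacity = queueCapacity; // maximum backlog before rejecting
    this.queue = [];                    // backlogged tasks
    this.active = 0;                    // tasks currently running
  }

  submit(task) {                        // task is an async function
    if (this.active < this.poolSize) {
      this.run(task);                                     // a worker is free: run now
    } else if (this.queue.length < this.queueCapacity) {
      this.queue.push(task);                              // all workers busy: backlog it
    } else {
      throw new Error('Backlog full, task rejected');     // option 2 above
    }
  }

  async run(task) {
    this.active += 1;
    try {
      await task();
    } catch (err) {
      console.error('Task failed:', err);  // keep the pool running on task failure
    } finally {
      this.active -= 1;
      const next = this.queue.shift();     // workers pull from the queue
      if (next) this.run(next);            // as soon as they finish
    }
  }
}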
I am using Spring 3.0.1 and Hibernate 3.2 with JBoss 4.2.2, and we are using Spring transaction management to manage the transactions.
My code runs a huge job that takes nearly 10 minutes. The Spring service bean RunJobBean.java is the entry point for my job; it instantiates a number of independent threads (each performing different DB updates and other logic), and these threads invoke the Hibernate DAO beans (injected into RunJobBean, which passes them on to the threads) to read from a DB2 server and to read and write data in two different Oracle databases (running on two different servers).
The bean StartRunJob.java does the necessary pre-processing and invokes RunJobBean to run the job.
This used to work fine until a recent change.
The bean StartRunJob.java (managed by another team; I have no control over it) has been modified recently to invoke multiple jobs in parallel. So StartRunJob invokes multiple independent threads, and each of these threads invokes my RunJobBean. On running StartRunJob, I am getting the errors mentioned below. The log shows they come from my code.
org.hibernate.exception.GenericJDBCException: Cannot open connection
Caused by: org.jboss.util.NestedSQLException: No ManagedConnections available within configured blocking timeout ( 30000 [ms] ); - nested throwable: (javax.resource.ResourceException: No ManagedConnections available within configured blocking timeout ( 30000 [ms] ))
The max number of connections configured on the server is 5 and the min is 1. Everyone is under the impression that my code connecting to Oracle DB1 is eating up all the connections and not releasing them. The JBoss console shows InUseConnectionCount as 3, 4 or 5, yet I am still seeing this issue. However, my code connecting to the second Oracle DB also has a max of 5 connections, and there I invoke 12 different threads to make DB calls and it works fine.
I would like advice on how I can get rid of this issue.
Thanks in advance.
Some questions related to this:
1. How can I check in JBoss which bean is holding a DB connection?
2. How can I check in JBoss how many DB connections are idle?
I have solved this problem. I identified a leak in the transaction handling.
Update: it has been a long time since I worked on this, but as I remember, in one of the transactions a property had to be read-only whereas it was set to something like update; because of this, a large number of calls were fired by Spring to the DB. When we changed it to read-only, things went back to normal.
But I am keeping this question open for some expert to answer the other questions, so they will be helpful to someone.