I have a Spring Boot application with PostgreSQL and RabbitMQ. I want to use a best-effort JTA transaction that contains both a Postgres and a RabbitMQ transaction.
I have added the spring-boot-starter-jta-atomikos dependency. When I start my application I receive this warning multiple times:
atomikos connection proxy for Pooled connection wrapping physical connection org.postgresql.jdbc.PgConnection#99c4993: WARNING: transaction manager not running?
Do I need any additional configuration?
I also get this warning at startup:
AtomikosDataSourceBean 'dataSource': poolSize equals default - this may cause performance problems!
I run with the following settings, but setMinPoolSize is never called:
spring.jta.atomikos.connectionfactory.max-pool-size: 10
spring.jta.atomikos.connectionfactory.min-pool-size: 5
The documentation at:
https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.jta.atomikos
https://www.atomikos.com/Documentation/SpringBootIntegration
just says I can add the starter dependency, but it seems like Spring Boot doesn't properly auto-configure some things.
The spring.jta.atomikos.connectionfactory properties are for controlling the JMS ConnectionFactory.
You should use the spring.jta.atomikos.datasource properties for controlling the JDBC DataSource configuration.
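For example, keeping the pool sizes from the question, the JDBC settings would be:

spring.jta.atomikos.datasource.min-pool-size: 5
spring.jta.atomikos.datasource.max-pool-size: 10

With these keys Spring Boot binds the values onto the AtomikosDataSourceBean (i.e. setMinPoolSize/setMaxPoolSize do get called), and the poolSize warning should disappear.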
For example, would these attempts be recorded as part of a trace session in system_traces.sessions or system_traces.events?
Edit: The driver I'm using is called gocql
In the Java driver, there is a logging retry policy which can act as a parent policy for another retry policy - it logs the retry decision.
In the gocql driver, though, looking at the query executor, I cannot see any explicit logging around retries. Only one of the retry mechanisms appears to log anything: the DowngradingConsistencyRetryPolicy, which logs the downgrade if debug logging is enabled.
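For comparison, this is roughly how the logging policy is wired in the DataStax Java driver (a 3.x-style sketch; gocql has no direct equivalent):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DefaultRetryPolicy;
import com.datastax.driver.core.policies.LoggingRetryPolicy;

public class RetryLoggingExample {
    public static void main(String[] args) {
        // LoggingRetryPolicy wraps a child policy and logs each retry
        // decision before delegating to it.
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withRetryPolicy(new LoggingRetryPolicy(DefaultRetryPolicy.INSTANCE))
                .build();
        cluster.close();
    }
}

So with gocql, if you need visibility into retries, you would have to add that logging yourself, e.g. in a custom RetryPolicy implementation.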
Unhealthy event: SourceId='System.FabricDnsService', Property='Environment', HealthState='Warning', ConsiderWarningAsError=false.
FabricDnsService is not preferred DNS server on the node.
Does anyone have a suggestion on where to start with this warning in Azure Service Fabric?
It looks like this was an issue in Service Fabric v6.0 and has been fixed in v6.1:
https://github.com/Azure/service-fabric-issues/issues/496
For now, to work around this, turn off all your network connections except the main one, reset the local cluster, and redeploy the app.
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-manifest
We are using Spring Batch and partitioned jobs extensively in our project. Occasionally we see partitioned jobs getting "hung" because of what appears to be lost messages. The remote partitions all complete, but the parent step stays in STARTED. Our configuration uses one connection factory for reading messages from the queues (inbound gateway) and a different, clustered connection factory to send out the partition messages (outbound gateway). The reason for this is that JBoss messaging doesn't uniformly distribute messages around the cluster, and the client connection factory provides that functionality.
Red Hat came in and, frankly, threw mud at Spring and the configuration. The following are excerpts from their report:
The Spring JmsTemplate code employs several anti-patterns, like creating a new connection, session, and producer just to send a message, then closing the connection. Also, when receiving a message it can create a consumer each time, receive the message, then close the consumer. This can result in poor performance under load. The use of anti-patterns not only results in poor performance, but can deplete operating system resources such as threads and file handles, since some of the connection resources are released asynchronously. Moreover, with non-durable topic subscribers you can end up losing messages, since any messages received between the closing of the last and the opening of the next consumer will be lost. The one place where it may be acceptable to use the Spring JmsTemplate is inside the application server using the JCA managed connection factory (normally at "java:/JmsXA"), and that only works when you're sending messages. The JCA managed connection factory caches connections so they will not actually be created each time. However, using the JCA managed connection factory will not resolve the issue with consumers, since they are not cached.
In summary, the Spring JmsTemplate is not safe to use apart from the very specific use case of using it inside the application server with the JCA managed connection factory (java:/JmsXA), and only in that case to send messages (do not use it to consume messages).
Using it from a JMS client application outside the application server is never safe, and using it with a standard connection factory (e.g. "ConnectionFactory", "ClusteredConnectionFactory", "jms/RemoteConnectionFactory", etc.) is never safe; also, using it to receive messages is never safe. To safely receive messages using Spring, consider the use of MessageListenerContainers [7] with MessageDriven Pojos [8].
Finally, note that the issues encountered stem from JMS anti-patterns and are thus not specific to JBoss EAP. For example, see a similar discussion with regard to ActiveMQ [9].
Red Hat does not support using the Spring JmsTemplate with JBoss Messaging apart from the one acceptable use case of sending messages via the JCA managed connection factory.
RECOMMENDATIONS
● As to Spring JMS, as a rule, use JCA managed connection factories configured in JBoss EAP. Do not use the Spring-configured connection factories. Use a JNDI template to pull the connection factories into Spring from JBoss. This will get rid of most of the Spring JMS problems.
● Use standard JMS instead of Spring JMS for the batch job. Spring is a non-standard (and probably sub-standard) implementation of JMS. Standard JMS uses a pool of a few senders to send the message and closes the session after the message is sent. On the listener side, standard JMS uses a pool of workers listening to a distributed Queue or Topic. Each web server has the JMS listener deployed as a singleton and uses a standard Java observer to notify any caller that is expecting a callback.
The JMS connection factories are configured in JBoss and loaded via JNDI.
Can you provide your feedback on their assessment?
To avoid the overhead of creating new connections/sessions per send, you need to wrap the provider's connection factory in a CachingConnectionFactory. It reuses the same connection for sends and caches sessions, producers, and consumers.
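A minimal sketch, assuming the JCA factory at java:/JmsXA from the report (the bean names here are made up):

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jndi.JndiTemplate;

@Configuration
public class JmsConfig {

    // Pull the JCA managed connection factory in from JBoss via JNDI,
    // as the Red Hat recommendations suggest.
    @Bean
    public ConnectionFactory targetConnectionFactory() throws Exception {
        return (ConnectionFactory) new JndiTemplate().lookup("java:/JmsXA");
    }

    // Reuse the connection and cache sessions/producers instead of
    // creating and closing them on every send.
    @Bean
    public CachingConnectionFactory cachingConnectionFactory() throws Exception {
        CachingConnectionFactory ccf =
                new CachingConnectionFactory(targetConnectionFactory());
        ccf.setSessionCacheSize(10); // cached sessions per connection
        return ccf;
    }

    @Bean
    public JmsTemplate jmsTemplate() throws Exception {
        return new JmsTemplate(cachingConnectionFactory());
    }
}

For receiving, don't rely on cached consumers; use a listener container (e.g. DefaultMessageListenerContainer), which is what the report's MessageListenerContainer recommendation refers to.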
I am using Spring 3.0.1 and Hibernate 3.2 with JBoss 4.2.2, and we are using Spring transaction management to manage the transactions.
My code runs a huge job that takes nearly 10 minutes. The Spring service bean RunJobBean.java is the entry point for my job; it instantiates a number of independent threads (each performing different DB updates and other logic), and these threads invoke the Hibernate DAO beans (these are injected into RunJobBean, which passes them on to the threads) to read from a DB2 server and to read and write data in two different Oracle databases (running on two different servers).
The bean StartRunJob.java does the necessary pre-processing and invokes RunJobBean to run the job.
This used to work fine until a recent change.
The bean StartRunJob.java (managed by another team; I have no control over it) was recently modified to invoke multiple jobs in parallel. So StartRunJob invokes multiple independent threads, and each of these threads invokes my RunJobBean. When StartRunJob runs, I get the errors below. The log shows they come from my code.
org.hibernate.exception.GenericJDBCException: Cannot open connection
Caused by: org.jboss.util.NestedSQLException: No ManagedConnections available within configured blocking timeout ( 30000 [ms] ); - nested throwable: (javax.resource.ResourceException: No ManagedConnections available within configured blocking timeout ( 30000 [ms] ))
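For reference, the 30000 ms in that error is the pool's blocking timeout from the datasource descriptor; in a JBoss 4.x *-ds.xml the relevant settings look roughly like this (the JNDI name and URL here are made up):

<datasources>
  <local-tx-datasource>
    <jndi-name>OracleDS1</jndi-name>
    <connection-url>jdbc:oracle:thin:@dbhost:1521:ORCL</connection-url>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <min-pool-size>1</min-pool-size>
    <max-pool-size>5</max-pool-size>
    <!-- how long a caller waits for a free connection before the
         "No ManagedConnections available" error is thrown -->
    <blocking-timeout-millis>30000</blocking-timeout-millis>
  </local-tx-datasource>
</datasources>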
The max number of connections configured on the server is 5 and the min is 1. Everyone is under the impression that my code connecting to Oracle DB1 is eating up all the connections and not releasing them. The JBoss console shows InUseConnectionCount as 3 or 4 or 5, but I am still seeing this issue. My code connecting to the second Oracle DB also has max connections set to 5, yet I invoke 12 different threads making DB calls there and that works fine.
I would like advice on how I can get rid of this issue.
Thanks in advance.
Some questions related to this:
1. How can I check in JBoss which bean is holding a DB connection?
2. How can I check in JBoss how many DB connections are idle?
I have solved this problem. I identified a leak in the transaction.
Update: it has been a long time since I worked on this, but as I remember, in one of the transactions a property should have been read-only, whereas it was set to something like update; because of this, Spring fired a large number of calls to the DB. When we changed it to read-only, things returned to normal.
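For illustration, the change amounted to something like this (a sketch; the class and method names are hypothetical, not from the actual job):

import java.util.Collections;
import java.util.List;
import org.springframework.transaction.annotation.Transactional;

public class ReportingDao {

    // Before, this was effectively read-write, so Spring/Hibernate kept
    // the transaction (and its connection) busy with unnecessary work.
    // Marking it read-only stopped the extra calls and released the
    // connection promptly.
    @Transactional(readOnly = true)
    public List<String> loadRecords() {
        // ... Hibernate read-only query here ...
        return Collections.emptyList();
    }
}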
But I will keep this question open for an expert to answer the other questions, so they may be helpful to someone.
We have our application running on a Sun Solaris system with a local WebSphere MQ installation. The application uses bindings mode to connect to the queue manager. When trying to send a message to the local queue, the JNDI binding is successful, but we encounter javax.jms.JMSSecurityException: MQJMS2013: invalid security authentication supplied for MQQueueManager. When we investigated, we found that the user ID used for authentication matches the user the application runs as, but not case-sensitively. By default, the user the application runs as is passed for authentication, but here the case-sensitive match is failing. The application server is WebLogic. Appreciate any inputs.
In order to open the local queue, the application must have first connected to the queue manager successfully. The error on the remote queue is a connection error so it is not even getting to the queue manager. This suggests that you are using different connection factories and that the second one has some differences in the connectivity parameters. First step is to reconcile those differences.
Also, an MQJMS2013 security error can be many things, most of which are not actually MQ issues. For example, some people store their managed objects in LDAP, and an authentication problem there will throw this error. For people who use a filesystem-based JNDI, OS file permissions can cause the same thing. However, if it is an actual WMQ issue (which this appears to be), then the linked exception will contain the MQ reason code (for example, MQRC=2035). If you want to be able to better diagnose MQ (or, for that matter, any JMS transport) issues, it pays to get in the habit of printing linked exceptions.
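A sketch of what printing the linked exception looks like (only the catch block matters here):

import javax.jms.JMSException;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;

public class MqDiagnostics {
    public static void connect(QueueConnectionFactory qcf) {
        try {
            QueueConnection conn = qcf.createQueueConnection();
            conn.close();
        } catch (JMSException je) {
            System.err.println("JMS error: " + je.getMessage());
            // The WMQ reason code (e.g. MQRC=2035) is carried on the
            // linked exception, not on the JMSException itself.
            Exception linked = je.getLinkedException();
            if (linked != null) {
                System.err.println("Linked exception: " + linked);
            }
        }
    }
}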
If you are not able to resolve this issue based on this input, I would advise updating the question with details of the managed object definitions and the reason code obtained from printing the linked exceptions.
We were using createQueueConnection() on the QueueConnectionFactory to create the connection, and the issue was resolved by using createQueueConnection("", "") instead. The Unix user ID (webA) is case sensitive, and the application was trying to authenticate to MQ with the wrongly-cased user ID (weba), so the MQ queue manager was rejecting the connection attempt. Can you tell us why the application was sending the wrongly-cased user ID (weba) earlier?
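In code terms, the change was essentially this (a sketch of our understanding):

// Before: the flowed user ID (arriving as "weba") failed the
// case-sensitive check on the queue manager.
// QueueConnection conn = qcf.createQueueConnection();

// After: empty credentials let the bindings-mode connection use the
// authenticated OS identity directly.
QueueConnection conn = qcf.createQueueConnection("", "");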
Thanks,
Arun