HikariCP "Too many connections" with jOOQ - Hikari pool initialisation fails

I have a service built with Spring Boot (2.4.5) and integrated with jOOQ. When we run load against the service we get "Too many connections" errors from jOOQ.
I am using the DSLContext like this:
public class CourseRepository {
    @Autowired
    private DSLContext dslContext;
}
I have not provided an explicit DSLContext bean.
The HikariCP configuration looks like this:
spring.datasource.url=[My RDS URL]
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jooq.sql-dialect=MySQL
spring.datasource.type=com.zaxxer.hikari.HikariDataSource
spring.datasource.hikari.pool-name=[DB Name]
spring.datasource.hikari.initial-size=5
spring.datasource.hikari.maximum-pool-size=40
spring.datasource.hikari.minimum-idle=2
spring.datasource.hikari.idle-timeout=10000
spring.datasource.timeout=10000
Error Stack
error.class:java.sql.SQLNonTransientConnectionException
error.message:Too many connections
error.stack:
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:110)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:833)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:453)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477)
Another Error
error.class:org.springframework.dao.DataAccessResourceFailureException
error.message:jOOQ; SQL [Query]; Too many connections; nested exception is java.sql.SQLNonTransientConnectionException: Too many connections
After reading several links and posts, I found that we have to provide the DSLContext bean:
@Primary
@Bean(name = "dslContext")
public DSLContext dslContext(@Qualifier("dataSource") DataSource dataSource) {
    this.dataSource = dataSource;
    log.info("Datasource {}", dataSource);
    org.jooq.Configuration config = new DefaultConfiguration();
    config.set(dataSource);
    config.set(SQLDialect.MYSQL);
    return DSL.using(config);
}
After adding this configuration, I am getting a different error.
error.class:org.jooq.exception.DataAccessException
error.message:Error getting connection from data source HikariDataSource (CorsairServiceDB)
error.stack:
at org.jooq_3.13.2.MYSQL.debug(Unknown Source)
at org.jooq.impl.DataSourceConnectionProvider.acquire(DataSourceConnectionProvider.java:86)
at org.jooq.impl.DefaultExecuteContext.connection(DefaultExecuteContext.java:647)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:334)
at org.jooq.impl.AbstractResultQuery.fetch(AbstractResultQuery.java:354)
at org.jooq.impl.AbstractResultQuery.fetchInto(AbstractResultQuery.java:1550)
at org.jooq.impl.SelectImpl.fetchInto(SelectImpl.java:3746)
at com.xyz.dao.CourseRepository.getCoursesByIds(CourseRepository.java:799)
at com.xyz.dao.CourseRepository$$FastClassBySpringCGLIB$$b22a30cd.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
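For reference, here is a minimal sketch of the kind of DSLContext wiring that spring-boot-starter-jooq would otherwise auto-configure against the single Spring-managed HikariDataSource: one jOOQ configuration backed by a DataSourceConnectionProvider, so a pooled connection is borrowed per statement and returned afterwards. The class name is hypothetical and this is a sketch under those assumptions, not a confirmed fix for the error above:

import javax.sql.DataSource;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.impl.DataSourceConnectionProvider;
import org.jooq.impl.DefaultConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JooqConfig {

    // Sketch only: one DSLContext bound to the one Spring-managed (Hikari) DataSource,
    // acquiring a pooled connection per statement and releasing it afterwards.
    @Bean
    public DSLContext dslContext(DataSource dataSource) {
        DefaultConfiguration config = new DefaultConfiguration();
        config.set(new DataSourceConnectionProvider(dataSource));
        config.set(SQLDialect.MYSQL);
        return DSL.using(config);
    }
}

Whichever bean ends up being used, the raw "Too many connections" message comes from the MySQL server's own max_connections limit, which has to cover the pools of every running service instance combined, so it is worth checking that 40 connections per instance actually fits under that limit.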

Related

Quarkus unable to load the cassandra custom retry policy class

I am working on migrating Quarkus from 1.x to 2.x, and the Quarkus integration with embedded Cassandra fails in unit testing with this error:
Caused by: java.lang.IllegalArgumentException: Can't find class com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
(specified by advanced.retry-policy.class)
**Custom retry policy**
public class CassandraCustomRetryPolicy implements RetryPolicy {
public CassandraCustomRetryPolicy(DriverContext context, String profileName) {
}
//override methods
}
**The Quarkus test looks like this:**
@QuarkusTest
@QuarkusTestResource(CassandraTestResource.class)
class Test {}
**The CassandraTestResource class starts the embedded Cassandra:**
public class CassandraTestResource implements QuarkusTestResourceLifecycleManager {
    private Cassandra cassandra;

    @Override
    public Map<String, String> start() {
        cassandra = new CassandraBuilder().version("3.11.9")
                .addEnvironmentVariable("JAVA_HOME", getJavaHome())
                .addJvmOptions("-Xms512M -Xmx512m").build();
        cassandra.start();
I have overridden the default Cassandra driver retry policy in application.conf inside the resources folder.
datastax-java-driver {
    basic.request {
        timeout = ****
        consistency = ***
        serial-consistency = ***
    }
    advanced.retry-policy {
        class = com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
    }
}
I have observed that my custom retry policy class falls under the banned resources in QuarkusClassLoader.java:
String resourceName = sanitizeName(name).replace('.', '/') + ".class";
boolean parentFirst = parentFirst(resourceName, state);
if (state.bannedResources.contains(resourceName)) {
throw new ClassNotFoundException(name);
}
I have captured the following logs -
java.lang.ClassNotFoundException: com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
at io.quarkus.bootstrap.classloading.QuarkusClassLoader.loadClass(QuarkusClassLoader.java:438)
at io.quarkus.bootstrap.classloading.QuarkusClassLoader.loadClass(QuarkusClassLoader.java:414)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:315)
at com.datastax.oss.driver.internal.core.util.Reflection.loadClass(Reflection.java:57)
at com.datastax.oss.driver.internal.core.util.Reflection.resolveClass(Reflection.java:288)
at com.datastax.oss.driver.internal.core.util.Reflection.buildFromConfig(Reflection.java:235)
at com.datastax.oss.driver.internal.core.util.Reflection.buildFromConfigProfiles(Reflection.java:194)
at com.datastax.oss.driver.internal.core.context.DefaultDriverContext.buildRetryPolicies(DefaultDriverContext.java:359)
at com.datastax.oss.driver.internal.core.util.concurrent.LazyReference.get(LazyReference.java:55)
at com.datastax.oss.driver.internal.core.context.DefaultDriverContext.getRetryPolicies(DefaultDriverContext.java:761)
at com.datastax.oss.driver.internal.core.session.DefaultSession$SingleThreaded.init(DefaultSession.java:339)
at com.datastax.oss.driver.internal.core.session.DefaultSession$SingleThreaded.access$1100(DefaultSession.java:300)
at com.datastax.oss.driver.internal.core.session.DefaultSession.lambda$init$0(DefaultSession.java:146)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:106)
at io.netty.channel.DefaultEventLoop.run(DefaultEventLoop.java:54)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
I am using Quarkus 2.7.2.Final with Cassandra driver 4.14.0.
It's not a complete answer but I wanted to leave some notes here in case anybody else can get this over the finish line before I get back to it.
The underlying problem here is that, in the Quarkus test case described above, the Java driver code is loaded by the QuarkusClassLoader, which (a) is more restrictive about where it loads code from and (b) doesn't appear to immediately support calling its parent if necessary. So in this case executing the following in the test will fail with a ClassNotFoundException:
CqlSession.class.getClassLoader().loadClass(customRetryPolicyClassName)
while the following works without issue:
CqlSession.class.getClassLoader().getParent().loadClass(customRetryPolicyClassName)
The class loader used to load CqlSession is the QuarkusClassLoader instance, while its parent is a stock JVM class loader.
The Java driver uses Class.forName() to load the classes specified for this policy. But since the Quarkus class loader is used to load the driver code itself, that's the loader that's used for these reflection ops... and, as mentioned above, that loader has some specific characteristics that make loading external code harder.
It worked after I initialized the CQL session like this:
CqlSession.builder()
    .addContactPoint(new InetSocketAddress(settings.getAddress(), settings.getPort()))
    .withLocalDatacenter("***")
    .withClassLoader(Thread.currentThread().getContextClassLoader())
    .build();

Hazelcast Spring Session SubZero(Kryo) EntryBackupProcessorImpl NullPointerException issue

I am using hazelcast-3.11.2 and SubZero-0.9 as the global serializer. I am trying to configure Spring Session using this example. When I have more than one node in the cluster, I get the following exception when trying to get the session id:
2019-03-20 15:01:59.088 ERROR 13635 --- [ration.thread-3] c.h.m.i.operation.EntryBackupOperation : [x.x.x.x]:5701 [hazelcast-group] [3.11.2] null
java.lang.NullPointerException: null
at com.hazelcast.map.AbstractEntryProcessor$EntryBackupProcessorImpl.processBackup(AbstractEntryProcessor.java:83)
at com.hazelcast.map.impl.operation.EntryOperator.process(EntryOperator.java:314)
at com.hazelcast.map.impl.operation.EntryOperator.operateOnKeyValueInternal(EntryOperator.java:181)
at com.hazelcast.map.impl.operation.EntryOperator.operateOnKey(EntryOperator.java:166)
at com.hazelcast.map.impl.operation.EntryBackupOperation.run(EntryBackupOperation.java:60)
at com.hazelcast.spi.impl.operationservice.impl.operations.Backup.run(Backup.java:158)
at com.hazelcast.spi.Operation.call(Operation.java:170)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:208)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:197)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:413)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:153)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:123)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:110)
My instance config looks like this:
@Configuration
@EnableHazelcastHttpSession
public class HazelcastSessionConfig extends AbstractHttpSessionApplicationInitializer {

    @Bean
    public HazelcastInstance hazelcastInstance() {
        Config config = new Config();
        SubZero.useAsGlobalSerializer(config);
        MapAttributeConfig attributeConfig = new MapAttributeConfig()
                .setName(HazelcastSessionRepository.PRINCIPAL_NAME_ATTRIBUTE)
                .setExtractor(PrincipalNameExtractor.class.getName());
        config.getMapConfig(HazelcastSessionRepository.DEFAULT_SESSION_MAP_NAME)
                .addMapAttributeConfig(attributeConfig)
                .addMapIndexConfig(new MapIndexConfig(
                        HazelcastSessionRepository.PRINCIPAL_NAME_ATTRIBUTE, false));
        return Hazelcast.newHazelcastInstance(config);
    }
}
Removing SubZero from the configuration removes the exception, so it looks like a SubZero issue. I also use this instance as my cache provider and for the Hibernate second-level cache, so I cannot get rid of SubZero.
My thoughts were:
1. Having two different clusters: one for cache, another for sessions. This doesn't work for me, since I do not know how to configure Spring Session to use a specific Hazelcast instance (pass an instance name, the bean itself, etc.) - see the sketch after this list.
2. Specifying which classes should be used with SubZero - but since I have plenty, and new classes are going to be added, this is not the best idea.
I will appreciate any help.
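On the first idea, here is a rough sketch (Hazelcast 3.x API, hypothetical instance and group names) of what two isolated instances could look like, so that SubZero stays on the cache cluster while the session cluster keeps default serialization. How Spring Session is then pointed at the session instance depends on the Spring Session version in use; newer spring-session-hazelcast releases have a @SpringSessionHazelcastInstance qualifier for exactly this, but that is an assumption to verify against your version:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import info.jerrinot.subzero.SubZero;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HazelcastInstancesConfig {

    // Cache + Hibernate L2 instance: keeps SubZero as the global serializer.
    @Bean
    public HazelcastInstance cacheHazelcastInstance() {
        Config config = new Config();
        config.setInstanceName("cache-instance");            // hypothetical name
        config.getGroupConfig().setName("cache-cluster");    // separate cluster group
        SubZero.useAsGlobalSerializer(config);
        return Hazelcast.newHazelcastInstance(config);
    }

    // Session instance: default serialization, no SubZero involved.
    @Bean
    public HazelcastInstance sessionHazelcastInstance() {
        Config config = new Config();
        config.setInstanceName("session-instance");          // hypothetical name
        config.getGroupConfig().setName("session-cluster");  // separate cluster group
        return Hazelcast.newHazelcastInstance(config);
    }
}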

GlobalChannelInterceptor pass array of patterns

I am using Spring Integration 4.3.13 and trying to pass patterns when configuring @GlobalChannelInterceptor.
Here is the example
@Configuration
public class IntegrationConfig {

    @Bean
    @GlobalChannelInterceptor(patterns = "${spring.channel.interceptor.patterns:*}")
    public ChannelInterceptor channelInterceptor() {
        return new ChannelInterceptorImpl();
    }
}
The properties file has the following value:
spring.channel.interceptor.patterns=*intchannel, *event
I am using direct channels with names that end with these two strings:
springintchannel
registrationevent
With the above config, both channels should have the interceptor applied, but it is not getting configured.
Comma-separated values aren't supported there currently.
I agree that we need to fix it, so feel free to raise a JIRA on the matter and we will look into a proper solution.
Meanwhile you can do this as a workaround:
@Bean
public GlobalChannelInterceptorWrapper channelInterceptorWrapper(
        @Value("${spring.channel.interceptor.patterns:*}") String[] patterns) {
    GlobalChannelInterceptorWrapper globalChannelInterceptorWrapper =
            new GlobalChannelInterceptorWrapper(channelInterceptor());
    globalChannelInterceptorWrapper.setPatterns(patterns);
    return globalChannelInterceptorWrapper;
}

GridGain with SpringBoot

I've built a Docker image of GridGain Pro and run it.
With Java I do the following...
Create the following @Configuration class:
@Configuration
@EnableCaching
public class CustomConfiguration extends CachingConfigurerSupport {

    @Bean
    @Override
    public KeyGenerator keyGenerator() {
        return (target, method, params) -> {
            StringBuilder sb = new StringBuilder();
            sb.append(target.getClass().getName());
            sb.append(method.getName());
            for (Object obj : params) {
                sb.append("|");
                sb.append(obj.toString());
            }
            return sb.toString();
        };
    }

    @Bean("cacheManager")
    public SpringCacheManager cacheManager(IgniteConfiguration igniteConfiguration) {
        try {
            SpringCacheManager springCacheManager = new SpringCacheManager();
            springCacheManager.setIgniteInstanceName("ignite");
            springCacheManager.setConfiguration(igniteConfiguration);
            springCacheManager.setDynamicCacheConfiguration(new CacheConfiguration<>().setCacheMode(CacheMode.REPLICATED));
            return springCacheManager;
        }
        catch (Exception ex) {
        }
        return null;
    }

    @Bean
    @Profile("!dev")
    IgniteConfiguration igniteConfiguration() {
        GridGainConfiguration gridGainConfiguration = new GridGainConfiguration();
        gridGainConfiguration.setRollingUpdatesEnabled(true);
        IgniteConfiguration igniteConfiguration = new IgniteConfiguration()
                .setPluginConfigurations(gridGainConfiguration)
                .setClientMode(true)
                .setPeerClassLoadingEnabled(false)
                .setIgniteInstanceName("MyIgnite");
        DataStorageConfiguration dataStorageConfiguration = new DataStorageConfiguration();
        DataRegionConfiguration dataRegionConfiguration = new DataRegionConfiguration();
        dataRegionConfiguration.setInitialSize(20 * 1024 * 1024);
        dataRegionConfiguration.setMaxSize(40 * 1024 * 1024);
        dataRegionConfiguration.setMetricsEnabled(true);
        dataStorageConfiguration.setDefaultDataRegionConfiguration(dataRegionConfiguration);
        igniteConfiguration.setDataStorageConfiguration(dataStorageConfiguration);
        TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
        TcpDiscoveryVmIpFinder tcpDiscoveryVmIpFinder = new TcpDiscoveryVmIpFinder();
        tcpDiscoveryVmIpFinder.setAddresses(Arrays.asList("192.168.99.100:47500..47502"));
        tcpDiscoverySpi.setIpFinder(tcpDiscoveryVmIpFinder);
        igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi);
        return igniteConfiguration;
    }
}
Start Spring and get the following error:
2018-04-18 12:27:29.277 WARN 12588 --- [ main] .GridEntDiscoveryNodeValidationProcessor : GridGain node cannot be in one cluster with Ignite node [locNodeAddrs=[server/0:0:0:0:0:0:0:1, server/10.29.96.164, server/127.0.0.1, /192.168.56.1, /192.168.99.1], rmtNodeAddrs=[172.17.0.1/0:0:0:0:0:0:0:1%lo, 192.168.99.100/10.0.2.15, 10.0.2.15/127.0.0.1, /172.17.0.1, /192.168.99.100]]
2018-04-18 12:27:29.283 ERROR 12588 --- [ main] o.a.i.internal.IgniteKernal%MyIgnite : Got exception while starting (will rollback startup routine).
I'm trying to use GridGain as a replacement for Redis and use the @Cacheable annotation.
Does anyone have a working gridgain example?
What is causing the error above?
G.
1) Okay, it seems the issue was not providing H2 as a dependency.
2) Using GridGain Professional instead of GridGain Enterprise.
G.
GridGain node cannot be in one cluster with Ignite node is pretty self-explanatory.
Either you have forgotten to stop some local Apache Ignite node from earlier experiments.
Or you have deliberately tried to make GridGain join an Ignite cluster.
Or, better yet, there is an instance of Apache Ignite running somewhere in your local network, and you have set up multicast discovery or some other kind of too-broad discovery, so they are seeing each other.
Maybe the gridgain-core.x.x.x.jar is missing from one of the nodes' classpaths. Check and add it if necessary.
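If too-broad discovery turns out to be the cause, a minimal sketch of pinning the client's discovery strictly to the GridGain container is shown below (it reuses the address from the question; the single port and everything else here are assumptions, not a verified setup):

import java.util.Collections;

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public final class DiscoveryConfig {

    // Sketch: restrict discovery to the single GridGain node exposed by Docker,
    // so the client cannot accidentally join a stray Ignite cluster elsewhere
    // on the network.
    static IgniteConfiguration clientConfiguration() {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("192.168.99.100:47500"));

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);

        return new IgniteConfiguration()
                .setClientMode(true)
                .setDiscoverySpi(discoverySpi);
    }
}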

CRM 2011 PLUGIN to update another entity

My plugin fires on Entity A, and in my code I invoke a web service that returns an XML file with some attributes (attr1, attr2, attr3, etc.) for Entity B, including its GUID.
I need to update Entity B using the attributes I received from the web service.
Can I use the service context class (SaveChanges), or what is the best way to accomplish this task?
I would appreciate it if you could provide an example.
There is no reason you need to use a service context in this instance. Here is a basic example of how I would solve this requirement. You'll obviously need to update this code to use the appropriate entities, implement your external web service call, and handle the field updates. In addition, it does not include any error checking or handling, which should be added for production code.
I assumed you are using the early-bound entity classes; if not, you'll need to update the code to use the generic Entity class.
class UpdateAnotherEntity : IPlugin
{
    private const string TARGET = "Target";

    public void Execute(IServiceProvider serviceProvider)
    {
        // PluginSetup is an abstraction from: http://nicknow.net/dynamics-crm-2011-abstracting-plugin-setup/
        var p = new PluginSetup(serviceProvider);
        var target = ((Entity) p.Context.InputParameters[TARGET]).ToEntity<Account>();
        var updateEntityAndXml = GetRelatedRecordAndXml(target);
        var relatedContactEntity =
            p.Service.Retrieve(Contact.EntityLogicalName, updateEntityAndXml.Item1, new ColumnSet(true)).ToEntity<Contact>();
        UpdateContactEntityWithXml(relatedContactEntity, updateEntityAndXml.Item2);
        p.Service.Update(relatedContactEntity);
    }

    private static void UpdateContactEntityWithXml(Contact relatedEntity, XmlDocument xmlDocument)
    {
        throw new NotImplementedException("UpdateContactEntityWithXml");
    }

    private static Tuple<Guid, XmlDocument> GetRelatedRecordAndXml(Account target)
    {
        throw new NotImplementedException("GetRelatedRecordAndXml");
    }
}
