Hazelcast Spring Session SubZero(Kryo) EntryBackupProcessorImpl NullPointerException issue - hazelcast

I am using hazelcast-3.11.2 with SubZero-0.9 as the global serializer. I am trying to configure Spring Session following this example. When there is more than one node in the cluster, I get the following exception when trying to get the session id:
2019-03-20 15:01:59.088 ERROR 13635 --- [ration.thread-3] c.h.m.i.operation.EntryBackupOperation : [x.x.x.x]:5701 [hazelcast-group] [3.11.2] null
java.lang.NullPointerException: null
    at com.hazelcast.map.AbstractEntryProcessor$EntryBackupProcessorImpl.processBackup(AbstractEntryProcessor.java:83)
    at com.hazelcast.map.impl.operation.EntryOperator.process(EntryOperator.java:314)
    at com.hazelcast.map.impl.operation.EntryOperator.operateOnKeyValueInternal(EntryOperator.java:181)
    at com.hazelcast.map.impl.operation.EntryOperator.operateOnKey(EntryOperator.java:166)
    at com.hazelcast.map.impl.operation.EntryBackupOperation.run(EntryBackupOperation.java:60)
    at com.hazelcast.spi.impl.operationservice.impl.operations.Backup.run(Backup.java:158)
    at com.hazelcast.spi.Operation.call(Operation.java:170)
    at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:208)
    at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:197)
    at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:413)
    at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:153)
    at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:123)
    at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:110)
My instance config looks like this:
@Configuration
@EnableHazelcastHttpSession
public class HazelcastSessionConfig extends AbstractHttpSessionApplicationInitializer {

    @Bean
    public HazelcastInstance hazelcastInstance() {
        Config config = new Config();
        SubZero.useAsGlobalSerializer(config);
        MapAttributeConfig attributeConfig = new MapAttributeConfig()
            .setName(HazelcastSessionRepository.PRINCIPAL_NAME_ATTRIBUTE)
            .setExtractor(PrincipalNameExtractor.class.getName());
        config.getMapConfig(HazelcastSessionRepository.DEFAULT_SESSION_MAP_NAME)
            .addMapAttributeConfig(attributeConfig)
            .addMapIndexConfig(new MapIndexConfig(
                HazelcastSessionRepository.PRINCIPAL_NAME_ATTRIBUTE, false));
        return Hazelcast.newHazelcastInstance(config);
    }
}
Removing SubZero from the configuration makes the exception go away, so it looks like a SubZero issue. I also use this instance as my cache provider and for the Hibernate second-level cache, so I cannot simply get rid of SubZero.
My thoughts were:
1. Use two different clusters: one for the cache, another for sessions. This doesn't work for me yet, because I don't know how to tell Spring Session to use a specific Hazelcast instance (by instance name, by passing the bean itself, etc.) - see the sketch after this list.
2. Specify explicitly which classes should be used with SubZero - but since I have plenty of them and new classes keep being added, this is not the best idea.
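For option 1, something like the following is what I have in mind (just a sketch, untested; the @SpringSessionHazelcastInstance qualifier is what I would expect to let Spring Session pick a specific instance, depending on the spring-session-hazelcast version):
@Configuration
@EnableHazelcastHttpSession
public class HazelcastSessionConfig extends AbstractHttpSessionApplicationInitializer {

    // Instance used only for Spring Session: no SubZero, sessions use Hazelcast's
    // default serialization. The PRINCIPAL_NAME_ATTRIBUTE map attribute/index
    // configuration from above would go on this instance.
    @Bean
    @SpringSessionHazelcastInstance
    public HazelcastInstance sessionHazelcastInstance() {
        Config config = new Config();
        config.setInstanceName("sessions");
        return Hazelcast.newHazelcastInstance(config);
    }

    // Separate instance for the cache provider / Hibernate second-level cache,
    // with SubZero as the global serializer.
    @Bean
    @Primary
    public HazelcastInstance cacheHazelcastInstance() {
        Config config = new Config();
        config.setInstanceName("cache");
        SubZero.useAsGlobalSerializer(config);
        return Hazelcast.newHazelcastInstance(config);
    }
}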
I would appreciate any help.

Related

Quarkus unable to load the cassandra custom retry policy class

I am working on a task to migrate Quarkus from 1.x to 2.x, and the Quarkus integration with embedded Cassandra fails in unit testing with this error:
Caused by: java.lang.IllegalArgumentException: Can't find class com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
(specified by advanced.retry-policy.class)
Custom retry policy:
public class CassandraCustomRetryPolicy implements RetryPolicy {

    public CassandraCustomRetryPolicy(DriverContext context, String profileName) {
    }

    // overridden methods omitted
}
The Quarkus test looks like this:
@QuarkusTest
@QuarkusTestResource(CassandraTestResource.class)
class Test {}
The CassandraTestResource class starts the embedded Cassandra:
public class CassandraTestResource implements QuarkusTestResourceLifecycleManager {

    private Cassandra cassandra;

    @Override
    public Map<String, String> start() {
        cassandra = new CassandraBuilder().version("3.11.9")
                .addEnvironmentVariable("JAVA_HOME", getJavaHome())
                .addJvmOptions("-Xms512M -Xmx512m")
                .build();
        cassandra.start();
        return Collections.emptyMap(); // no extra config properties exposed here
    }
    // stop() omitted
}
I have overridden the default Cassandra driver retry policy in application.conf inside the resources folder:
datastax-java-driver {
  basic.request {
    timeout = ****
    consistency = ***
    serial-consistency = ***
  }
  advanced.retry-policy {
    class = com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
  }
}
I have observed that my custom retry policy class is treated as a banned resource in QuarkusClassLoader.java:
String resourceName = sanitizeName(name).replace('.', '/') + ".class";
boolean parentFirst = parentFirst(resourceName, state);
if (state.bannedResources.contains(resourceName)) {
throw new ClassNotFoundException(name);
}
I have captured the following logs -
java.lang.ClassNotFoundException: com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
at io.quarkus.bootstrap.classloading.QuarkusClassLoader.loadClass(QuarkusClassLoader.java:438)
at io.quarkus.bootstrap.classloading.QuarkusClassLoader.loadClass(QuarkusClassLoader.java:414)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:315)
at com.datastax.oss.driver.internal.core.util.Reflection.loadClass(Reflection.java:57)
at com.datastax.oss.driver.internal.core.util.Reflection.resolveClass(Reflection.java:288)
at com.datastax.oss.driver.internal.core.util.Reflection.buildFromConfig(Reflection.java:235)
at com.datastax.oss.driver.internal.core.util.Reflection.buildFromConfigProfiles(Reflection.java:194)
at com.datastax.oss.driver.internal.core.context.DefaultDriverContext.buildRetryPolicies(DefaultDriverContext.java:359)
at com.datastax.oss.driver.internal.core.util.concurrent.LazyReference.get(LazyReference.java:55)
at com.datastax.oss.driver.internal.core.context.DefaultDriverContext.getRetryPolicies(DefaultDriverContext.java:761)
at com.datastax.oss.driver.internal.core.session.DefaultSession$SingleThreaded.init(DefaultSession.java:339)
at com.datastax.oss.driver.internal.core.session.DefaultSession$SingleThreaded.access$1100(DefaultSession.java:300)
at com.datastax.oss.driver.internal.core.session.DefaultSession.lambda$init$0(DefaultSession.java:146)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:106)
at io.netty.channel.DefaultEventLoop.run(DefaultEventLoop.java:54)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
I am using quarkus version 2.7.2.Final with cassandra driver version 4.14.0
It's not a complete answer but I wanted to leave some notes here in case anybody else can get this over the finish line before I get back to it.
The underlying problem here is that in the Quarkus test case described above the Java driver code is loaded by the QuarkusClassLoader, which (a) is more restrictive about where it loads code from and (b) doesn't appear to immediately support delegating to its parent if necessary. So in this case executing the following in the test will fail with a ClassNotFoundException:
CqlSession.class.getClassLoader().loadClass(customRetryPolicyClassName)
while the following works without issue:
CqlSession.class.getClassLoader().getParent().loadClass(customRetryPolicyClassName)
The class loader used to load CqlSession is the QuarkusClassLoader instance, while its parent is a stock JVM class loader.
The Java driver uses Class.forName() to load the classes specified for this policy. But since the Quarkus class loader is used to load the driver code itself, that's the loader used for these reflection operations... and as mentioned above, that loader has some specific characteristics that make loading external code harder.
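To make the distinction concrete, here is a small illustrative sketch (not driver code; the class name is the one from the stack trace above) of the two lookups just described:
// Sketch: the same class resolves or fails depending on which loader does the lookup.
String name = "com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy";
ClassLoader quarkusLoader = CqlSession.class.getClassLoader();   // QuarkusClassLoader inside the test
ClassLoader parentLoader = quarkusLoader.getParent();            // stock JVM class loader

Class.forName(name, false, parentLoader);    // succeeds
Class.forName(name, false, quarkusLoader);   // throws ClassNotFoundException (banned resource)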
It worked after I initialized the CQL session like this:
CqlSession.builder()
    .addContactPoint(new InetSocketAddress(settings.getAddress(), settings.getPort()))
    .withLocalDatacenter("***")
    .withClassLoader(Thread.currentThread().getContextClassLoader())
    .build();

HikariCP "too many connections" with jOOQ, causing HikariCP pool initialisation to fail

I have a service built with Spring Boot (2.4.5) and integrated with jOOQ. When we put load on the service we get "too many connections" errors from jOOQ.
I am using the DSLContext like this:
public class CourseRepository {

    @Autowired
    private DSLContext dslContext;
}
I have not provided an explicit DSLContext bean. The HikariCP configuration looks like this:
spring.datasource.url=[My RDS URL]
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jooq.sql-dialect=MySQL
spring.datasource.type=com.zaxxer.hikari.HikariDataSource
spring.datasource.hikari.pool-name=[DB Name]
spring.datasource.hikari.initial-size=5
spring.datasource.hikari.maximum-pool-size=40
spring.datasource.hikari.minimum-idle=2
spring.datasource.hikari.idle-timeout=10000
spring.datasource.timeout=10000
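For reference, the same pool limits expressed programmatically would be roughly this (a sketch; the spring.datasource.hikari.* keys bind onto the corresponding HikariConfig setters, and note that HikariCP has no initial-size setting):
// Sketch: equivalent programmatic Hikari configuration (values taken from the properties above).
HikariConfig hikariConfig = new HikariConfig();
hikariConfig.setJdbcUrl(rdsUrl);                         // spring.datasource.url (placeholder)
hikariConfig.setDriverClassName("com.mysql.cj.jdbc.Driver");
hikariConfig.setPoolName("my-pool");                     // spring.datasource.hikari.pool-name (placeholder)
hikariConfig.setMaximumPoolSize(40);                     // hard cap on connections opened by this service instance
hikariConfig.setMinimumIdle(2);
hikariConfig.setIdleTimeout(10_000);                     // milliseconds
HikariDataSource dataSource = new HikariDataSource(hikariConfig);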
Error Stack
error.class: java.sql.SQLNonTransientConnectionException
error.message: Too many connections
error.stack:
    at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:110)
    at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
    at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:833)
    at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:453)
    at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246)
    at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198)
    at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
    at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358)
    at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
    at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477)
Another Error
error.class:org.springframework.dao.DataAccessResourceFailureException
error.message:jOOQ; SQL [Query]; Too many connections; nested exception is java.sql.SQLNonTransientConnectionException: Too many connections
After reading several links and posts, I found that we have to provide the DSLContext bean:
@Primary
@Bean(name = "dslContext")
public DSLContext dslContext(@Qualifier("dataSource") DataSource dataSource) {
    this.dataSource = dataSource;
    log.info("Datasource {}", dataSource);
    org.jooq.Configuration config = new DefaultConfiguration();
    config.set(dataSource);
    config.set(SQLDialect.MYSQL);
    return DSL.using(config);
}
After adding this configuration, I am getting a different error:
error.class: org.jooq.exception.DataAccessException
error.message: Error getting connection from data source HikariDataSource (CorsairServiceDB)
error.stack:
    at org.jooq_3.13.2.MYSQL.debug(Unknown Source)
    at org.jooq.impl.DataSourceConnectionProvider.acquire(DataSourceConnectionProvider.java:86)
    at org.jooq.impl.DefaultExecuteContext.connection(DefaultExecuteContext.java:647)
    at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:334)
    at org.jooq.impl.AbstractResultQuery.fetch(AbstractResultQuery.java:354)
    at org.jooq.impl.AbstractResultQuery.fetchInto(AbstractResultQuery.java:1550)
    at org.jooq.impl.SelectImpl.fetchInto(SelectImpl.java:3746)
    at com.xyz.dao.CourseRepository.getCoursesByIds(CourseRepository.java:799)
    at com.xyz.dao.CourseRepository$$FastClassBySpringCGLIB$$b22a30cd.invoke(<generated>)
    at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)

jHipster gateway downstream prefix

I'm updating an old JHipster gateway to version 7.5.0. The new version uses Spring Cloud Gateway (with Eureka), while the old one used Zuul. In the previous version, with service discovery, a service named 'foo' with path 'bar' was registered without any prefix on the gateway, so it could be accessed as:
GATEWAY_URL/foo/bar
Right now all services register with a 'services/' prefix, which requires calling the following URL:
GATEWAY_URL/services/foo/bar
I can't find the configuration responsible for that. I found a property spring.webservices.path, but changing it to another value does not make any difference, and in Spring Boot 2.6.3 its value cannot be empty or '/' (I'm not sure this is even the property I should be looking at). I also experimented with spring.cloud.gateway.routes in this form:
spring:
  webservices:
    path: /test
  main:
    allow-bean-definition-overriding: true
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
        - id: user-service-route
          uri: lb://user
          predicates:
            - Path=/user/**
but without any luck. I'm also not sure whether this is a JHipster issue or a Spring Cloud Gateway one.
I need to change this so that other systems using my API won't have to update their paths. I know I could always put nginx in front to rewrite the path, but that doesn't feel right.
This behavior comes from the SCG autoconfiguration: GatewayDiscoveryClientAutoConfiguration registers a DiscoveryLocatorProperties bean with the predicate:
PredicateDefinition{name='Path', args={pattern='/'+serviceId+'/**'}}
I didn't want to change the autoconfiguration, so I wrote a WebFilter that runs first and mutates the request path:
public class ServicesFilter implements WebFilter {

    private final ServicesMappingConfigration mapping;

    public ServicesFilter(ServicesMappingConfigration mapping) {
        this.mapping = mapping;
    }

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        RequestPath path = exchange.getRequest().getPath();
        if (path.elements().size() > 1) {
            PathContainer pathContainer = path.subPath(1, 2);
            if (mapping.getServices().contains(pathContainer.value())) {
                ServerHttpRequest mutatedRequest = exchange
                    .getRequest()
                    .mutate()
                    .path("/services" + exchange.getRequest().getPath())
                    .build();
                ServerWebExchange mutatedExchange = exchange.mutate().request(mutatedRequest).build();
                return chain.filter(mutatedExchange);
            }
        }
        return chain.filter(exchange);
    }
}
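I register it as a plain bean; a minimal sketch of that wiring (assumed, not part of the filter above - the @Order just pushes it ahead of the other WebFilters):
@Configuration
public class ServicesFilterConfig {

    // Give the filter the highest precedence so it rewrites the path before
    // Spring Cloud Gateway's routing handler sees the request.
    @Bean
    @Order(Ordered.HIGHEST_PRECEDENCE)
    public ServicesFilter servicesFilter(ServicesMappingConfigration mapping) {
        return new ServicesFilter(mapping);
    }
}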

Mockito: how can I simulate that a record doesn't exist

.exception.INSSTaxNotFoundException: INSS Tax not found with ID 1
Could someone help me? I want to mock inssTaxService.findById, but I don't know how to do it. I get the error above (INSSTaxNotFoundException: INSS Tax not found with ID 1), but I would like the record to be found so the test can continue. Can I do that in the service or not?
@Test
void whenINSSTaxIdInformedThenReturnThisINSSTax() throws INSSTaxNotFoundException {
    INSSTaxDTO expectedSavedInssTaxDTO = INSSTaxBuilder.builder().build().toINSSTaxDTO();
    INSSTax expectedSavedInssTax = inssTaxMapper.toModel(expectedSavedInssTaxDTO);

    when(inssTaxService.findById(expectedSavedInssTaxDTO.getId()))
        .thenReturn(expectedSavedInssTaxDTO);

    assertEquals(expectedSavedInssTax.getId(), expectedSavedInssTaxDTO.getId());
    assertEquals(expectedSavedInssTax.getDescription(), expectedSavedInssTaxDTO.getDescription());
    assertEquals(expectedSavedInssTax.getSocialSecurityRatePercent(), expectedSavedInssTaxDTO.getSocialSecurityRatePercent());
}
What you might be missing is actually injecting the mock of inssTaxService into the class you are testing.
Your code would be something like this, assuming it is plain Java (not Spring Boot etc.; adjust accordingly in that case).
First, mock the service (which I think you have already done, otherwise Mockito would have thrown an error):
InssTaxService mockedInssTaxService = Mockito.mock(InssTaxService.class);
// other behaviour on this mock, e.g.
when(mockedInssTaxService.findById(expectedSavedInssTaxDTO.getId()))
    .thenReturn(expectedSavedInssTaxDTO);
Then inject the mocked object into the class under test:
ClassToTest classToTest = new ClassToTest(mockedInssTaxService);
If you are using Spring Boot test you can use @MockBean, or @Mock and @InjectMocks, instead of the new keyword - see the sketch below.
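A minimal sketch of the annotation-based variant (the DTO, builder and service names are taken from the question; the class under test, InssTaxFacade here, is hypothetical):
@ExtendWith(MockitoExtension.class)
class InssTaxFacadeTest {

    @Mock
    private InssTaxService inssTaxService;   // dependency being mocked

    @InjectMocks
    private InssTaxFacade inssTaxFacade;     // hypothetical class under test that calls the service

    @Test
    void whenINSSTaxIdInformedThenReturnThisINSSTax() throws INSSTaxNotFoundException {
        INSSTaxDTO dto = INSSTaxBuilder.builder().build().toINSSTaxDTO();
        when(inssTaxService.findById(dto.getId())).thenReturn(dto);

        // the facade delegates to inssTaxService.findById(...), so no "not found" exception is thrown
        assertEquals(dto.getId(), inssTaxFacade.findById(dto.getId()).getId());
    }
}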

WeldClientProxy cannot be cast

I use the UnboundID library, which has a class LDAPConnection that has no default constructor and implements LDAPInterface. I produce the LDAPConnection as follows:
@Produces
@SimpleLdapConnection
@ApplicationScoped
public LDAPInterface createLdapConnection() throws GeneralSecurityException, LDAPException {
    LDAPConnection conn = new LDAPConnection(host, port, username, password);
    return conn;
}
I now want to inject this LDAPConnection into a second producer, which should create a connection pool:
@Inject
@SimpleLdapConnection
LDAPInterface simpleLdapConnection;

@Produces
@Default
@ApplicationScoped
public LDAPInterface produceLdapConnectionPool() throws GeneralSecurityException, LDAPException {
    LDAPConnectionPool pool = new LDAPConnectionPool((LDAPConnection) simpleLdapConnection, connectionPoolInitialSize, connectionPoolMaxSize);
    return pool;
}
To create the LDAPConnectionPool, I need to cast the simpleLdapConnection to an LDAPConnection (as it must be an LDAPConnection).
However, I get the error:
java.lang.ClassCastException: org.jboss.weld.proxies.LDAPInterface$1687649628$Proxy$_$$_WeldClientProxy cannot be cast to com.unboundid.ldap.sdk.LDAPConnection
    at at.rsg.lp.benutzerverwaltung.business.repository.LdapConnectionPoolProvider.produceLdapConnectionPool(LdapConnectionPoolProvider.java:59)
How can I get around this error?
P.S. changing the first producer to return an LDAPConnection does not work as I get the error "Injected normal scoped bean is not proxyable".
What you are running into, from the CDI point of view, are the defined bean types of a producer method. This is backed by the CDI specification.
In short, for producer methods the bean types are derived from the return type and the interfaces it implements; the actual implementation type is not included. The reason is exactly what you saw when you tried to return the actual implementation type: implementations often contain final methods or other bumps that make them unproxyable.
There are two things I can think of to solve this:
[This one is likely to fail] Try putting a @Typed annotation on your producer - I doubt it will work in this case, but it could be worth a shot. This annotation declares all the types the bean will have. You would use it like this: @Typed({LDAPInterface.class, LDAPConnection.class}).
[This should be the go-to option] If I were you, I would create a wrapper object, just as you suggested. It won't really be all that ugly; a few bits and pieces of code should do the trick - see the sketch below.
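A minimal sketch of that wrapper approach (names and the config fields are illustrative; the constructor arguments are the same ones your first producer already uses):
@ApplicationScoped
public class LdapConnectionHolder {

    // host, port, username, password are assumed to be injected/configured elsewhere
    private LDAPConnection connection;

    @PostConstruct
    void init() {
        try {
            connection = new LDAPConnection(host, port, username, password);
        } catch (LDAPException e) {
            throw new IllegalStateException("Could not open LDAP connection", e);
        }
    }

    // Returns the raw, unproxied LDAPConnection, so no cast through a Weld client proxy is needed.
    public LDAPConnection getConnection() {
        return connection;
    }
}

@Produces
@Default
@ApplicationScoped
public LDAPInterface produceLdapConnectionPool(LdapConnectionHolder holder) throws LDAPException {
    return new LDAPConnectionPool(holder.getConnection(), connectionPoolInitialSize, connectionPoolMaxSize);
}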
