I am working on migrating Quarkus from 1.x to 2.x, and the Quarkus integration with embedded Cassandra fails in unit tests with this error:
Caused by: java.lang.IllegalArgumentException: Can't find class com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
(specified by advanced.retry-policy.class)
**Custom retry policy**
public class CassandraCustomRetryPolicy implements RetryPolicy {

    public CassandraCustomRetryPolicy(DriverContext context, String profileName) {
    }

    // overridden RetryPolicy methods omitted
}
**The Quarkus test looks like this:**
@QuarkusTest
@QuarkusTestResource(CassandraTestResource.class)
class Test {}
**The CassandraTestResource class starts the embedded Cassandra:**
public class CassandraTestResource implements QuarkusTestResourceLifecycleManager {

    private Cassandra cassandra;

    @Override
    public Map<String, String> start() {
        cassandra = new CassandraBuilder().version("3.11.9")
                .addEnvironmentVariable("JAVA_HOME", getJavaHome())
                .addJvmOptions("-Xms512M -Xmx512m").build();
        cassandra.start();
        return java.util.Collections.emptyMap(); // connection properties omitted in the original snippet
    }
I have overridden the default Cassandra driver retry policy in application.conf in the resources folder:
datastax-java-driver {
  basic.request {
    timeout = ****
    consistency = ***
    serial-consistency = ***
  }
  advanced.retry-policy {
    class = com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
  }
}
I have observed that my custom retry policy class is treated as a banned resource in QuarkusClassLoader.java:
String resourceName = sanitizeName(name).replace('.', '/') + ".class";
boolean parentFirst = parentFirst(resourceName, state);
if (state.bannedResources.contains(resourceName)) {
throw new ClassNotFoundException(name);
}
I have captured the following logs -
java.lang.ClassNotFoundException: com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
at io.quarkus.bootstrap.classloading.QuarkusClassLoader.loadClass(QuarkusClassLoader.java:438)
at io.quarkus.bootstrap.classloading.QuarkusClassLoader.loadClass(QuarkusClassLoader.java:414)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:315)
at com.datastax.oss.driver.internal.core.util.Reflection.loadClass(Reflection.java:57)
at com.datastax.oss.driver.internal.core.util.Reflection.resolveClass(Reflection.java:288)
at com.datastax.oss.driver.internal.core.util.Reflection.buildFromConfig(Reflection.java:235)
at com.datastax.oss.driver.internal.core.util.Reflection.buildFromConfigProfiles(Reflection.java:194)
at com.datastax.oss.driver.internal.core.context.DefaultDriverContext.buildRetryPolicies(DefaultDriverContext.java:359)
at com.datastax.oss.driver.internal.core.util.concurrent.LazyReference.get(LazyReference.java:55)
at com.datastax.oss.driver.internal.core.context.DefaultDriverContext.getRetryPolicies(DefaultDriverContext.java:761)
at com.datastax.oss.driver.internal.core.session.DefaultSession$SingleThreaded.init(DefaultSession.java:339)
at com.datastax.oss.driver.internal.core.session.DefaultSession$SingleThreaded.access$1100(DefaultSession.java:300)
at com.datastax.oss.driver.internal.core.session.DefaultSession.lambda$init$0(DefaultSession.java:146)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:106)
at io.netty.channel.DefaultEventLoop.run(DefaultEventLoop.java:54)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
I am using Quarkus version 2.7.2.Final with Cassandra driver version 4.14.0.
It's not a complete answer but I wanted to leave some notes here in case anybody else can get this over the finish line before I get back to it.
The underlying problem here is that in the Quarkus test case described above the Java driver code is loaded by the QuarkusClassLoader, which (a) is more restrictive about where it loads code from and (b) doesn't appear to delegate to its parent when necessary. So in this case, executing the following in the test fails with a ClassNotFoundException:
CqlSession.class.getClassLoader().loadClass(customRetryPolicyClassName)
while the following works without issue:
CqlSession.class.getClassLoader().getParent().loadClass(customRetryPolicyClassName)
The class loader used to load CqlSession is the QuarkusClassLoader instance, while its parent is a stock JVM class loader.
The Java driver uses Class.forName() to load the classes specified for this policy. But since the Quarkus class loader is used to load the driver code itself, that is also the loader used for these reflective operations... and as mentioned above that loader has some specific characteristics that make loading external code harder.
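To illustrate the class-loading behaviour described above (this is generic Java semantics rather than the driver's exact code path; the policy class name is the one from the question):

// Class.forName(name) resolves against the defining loader of the calling class.
// Inside the driver, that caller was loaded by the QuarkusClassLoader, which bans the resource:
Class<?> viaDriverLoader = Class.forName(
        "com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy"); // -> ClassNotFoundException

// Supplying a loader explicitly changes which loader performs the lookup:
Class<?> viaExplicitLoader = Class.forName(
        "com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy",
        true,
        Thread.currentThread().getContextClassLoader());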
It worked after I initialized the CQL session like this:
CqlSession.builder()
    .addContactPoint(new InetSocketAddress(settings.getAddress(), settings.getPort()))
    .withLocalDatacenter("***")
    .withClassLoader(Thread.currentThread().getContextClassLoader())
    .build();
I have a service built with Spring Boot (2.4.5) that is integrated with jOOQ. When we put load on the service, we get "Too many connections" errors from jOOQ.
I am using the DSLContext like this:
public class CourseRepository {
    @Autowired private DSLContext dslContext;
}
I haven't provided an explicit DSLContext bean.
The HikariCP configuration looks like this:
spring.datasource.url=[My RDS URL]
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jooq.sql-dialect=MySQL
spring.datasource.type=com.zaxxer.hikari.HikariDataSource
spring.datasource.hikari.pool-name=[DB Name]
spring.datasource.hikari.initial-size=5
spring.datasource.hikari.maximum-pool-size=40
spring.datasource.hikari.minimum-idle=2
spring.datasource.hikari.idle-timeout=10000
spring.datasource.timeout=10000
Error Stack
error.class:java.sql.SQLNonTransientConnectionException
error.message:Too many connections
error.stack:
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:110)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:833)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:453)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477)
Another Error
error.class:org.springframework.dao.DataAccessResourceFailureException
error.message:jOOQ; SQL [Query]; Too many connections; nested exception is java.sql.SQLNonTransientConnectionException: Too many connections
After reading several links and posts, I found that we have to provide a DSLContext bean:
@Primary
@Bean(name = "dslContext")
public DSLContext dslContext(@Qualifier("dataSource") DataSource dataSource) {
    this.dataSource = dataSource;
    log.info("Datasource {}", dataSource);
    org.jooq.Configuration config = new DefaultConfiguration();
    config.set(dataSource);
    config.set(SQLDialect.MYSQL);
    return DSL.using(config);
}
After adding this configuration, I am getting a different error.
error.class:org.jooq.exception.DataAccessException
error.message:Error getting connection from data source HikariDataSource (CorsairServiceDB)
error.stack:
at org.jooq_3.13.2.MYSQL.debug(Unknown Source)
at org.jooq.impl.DataSourceConnectionProvider.acquire(DataSourceConnectionProvider.java:86)
at org.jooq.impl.DefaultExecuteContext.connection(DefaultExecuteContext.java:647)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:334)
at org.jooq.impl.AbstractResultQuery.fetch(AbstractResultQuery.java:354)
at org.jooq.impl.AbstractResultQuery.fetchInto(AbstractResultQuery.java:1550)
at org.jooq.impl.SelectImpl.fetchInto(SelectImpl.java:3746)
at com.xyz.dao.CourseRepository.getCoursesByIds(CourseRepository.java:799)
at com.xyz.dao.CourseRepository$$FastClassBySpringCGLIB$$b22a30cd.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
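For reference, a common way to wire such a bean in a Spring Boot application is to let jOOQ borrow and release pooled connections through Spring's transaction-aware proxy. This is only a sketch (the JooqConfig class name is made up, and it is not a verified fix for the errors above):

import javax.sql.DataSource;
import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.impl.DataSourceConnectionProvider;
import org.jooq.impl.DefaultConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy;

@Configuration
public class JooqConfig {

    @Bean
    public DSLContext dslContext(DataSource dataSource) {
        // Wrap the Hikari DataSource so each statement acquires a connection from the pool
        // and returns it when the statement completes, participating in Spring transactions.
        DataSourceConnectionProvider connectionProvider =
                new DataSourceConnectionProvider(new TransactionAwareDataSourceProxy(dataSource));

        DefaultConfiguration configuration = new DefaultConfiguration();
        configuration.set(connectionProvider);
        configuration.set(SQLDialect.MYSQL);
        return DSL.using(configuration);
    }
}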
I have created a custom mapper class as below, but ABP does not automatically register and use it while mapping.
https://docs.abp.io/en/abp/4.4/Object-To-Object-Mapping#iobjectmapper-tsource-tdestination-interface
Sorry for the lack of detail; I have added some below.
I have found that my custom mapper class's interface is different from the mapping I invoke via the object mapper.
Should I implement it for all container types?
public class HierachyItemCustomMapper : IObjectMapper<HierachyItem, HierachyItemDto>, ITransientDependency
{
and my usage looks like:
var nodeListDto = ObjectMapper.Map<IEnumerable<HierachyItem>, IEnumerable<HierachyItemDto>>(nodeList);
How can I handle this?
Obviously, I am looking for a mapped result rather than a foreach loop over the items.
Edit:
I have found that it is a known issue, as described here:
https://github.com/abpframework/abp/issues/94
I've just tried it and it seems to work as expected.
This is my HierachyItemCustomMapper class which I've created in the Application layer. (It should be created in the Application layer.)
public class HierachyItemCustomMapper : IObjectMapper<HierachyItem, HierachyItemDto>, ITransientDependency
{
public HierachyItemDto Map(HierachyItem source)
{
return new HierachyItemDto
{
Name = source.Name
};
}
public HierachyItemDto Map(HierachyItem source, HierachyItemDto destination)
{
destination.Name = source.Name;
return destination;
}
}
I've just added a property named Name to both classes (HierachyItem and HierachyItemDto) to test.
You probably didn't define it in the Application layer, and that caused the problem. Can you check?
It's simple: your definition is wrong.
It should be like this:
public class HierachyItemCustomMapper
    : IObjectMapper<IEnumerable<HierachyItem>, IEnumerable<HierachyItemDto>>, ITransientDependency {}
because it searches for an exact definition match. If you also want the capability of using ObjectMapper.Map<HierachyItem, HierachyItemDto>,
you can declare your custom mapper like this:
public class HierachyItemCustomMapper
    : IObjectMapper<IEnumerable<HierachyItem>, IEnumerable<HierachyItemDto>>,
      IObjectMapper<HierachyItem, HierachyItemDto>,
      ITransientDependency {}
and implement both.
Good luck!
I am currently working on a new project and chose Cassandra as our data store.
I have a use case where I store prices for a material, and to accomplish this I created a list of User-Defined Types (UDTs). Unfortunately, deserialization with the DataStax driver fails: after querying the required data, I found that the list field is null even though there is a value for it in the database. Is this a current limitation of the Cassandra Java driver, or am I missing something?
This is what my simplified entity (table) looks like:
#PrimaryKeyColumn(name = "tenant_id", ordinal = 0, type = PrimaryKeyType.PARTITIONED)
private long tenantId;
#PrimaryKeyColumn(name = "item_id", ordinal = 1, type = PrimaryKeyType.CLUSTERED)
private String itemId;
#CassandraType(type = DataType.Name.LIST, userTypeName = "volume_scale_1")
private List<VolumeScale> volumeScale1;
}
So I am getting volumeScale1 as null after the database select query.
And this is what my UDT looks like:
In Cassandra database:
CREATE TYPE pricingservice.volume_scale (
from_scale int,
to_scale int,
value frozen<price_value>
);
As a UDT in Java:
@UserDefinedType("volume_scale")
public class VolumeScale
{
    @CassandraType(type = DataType.Name.TEXT, userTypeName = "from_scale")
    @Column("from_scale")
    private String fromScale;

    @CassandraType(type = DataType.Name.TEXT, userTypeName = "to_scale")
    @Column("to_scale")
    private String toScale;

    @CassandraType(type = DataType.Name.UDT, userTypeName = "value")
    private PriceValue value;

    // getters and setters
}
I also tried using the Object Mapper from the Java driver itself, as per @Alex's suggestion, but got stuck at the point where creating an object via ItemPriceByMaterialMapperBuilder throws a compilation error. Is anything additional required for annotation processing, or am I missing something? Do you have any idea how to use the Mapper annotation? I also tried Google AutoService to run annotation processing externally, but it didn't work.
@Mapper
// @AutoService(Processor.class)
public interface ItemPriceByMaterialMapper
        // extends Processor
{
    static MapperBuilder<ItemPriceByMaterialMapper> builder(CqlSession session) {
        return new ItemPriceByMaterialMapperBuilder(session);
    }

    @DaoFactory
    ItemPriceByMaterialDao itemPriceByMaterialDao();

    // @DaoFactory
    // ItemPriceByMaterialDao itemPriceByMaterialDao(@DaoKeyspace CqlIdentifier keyspace);
}
Version used:
Java Version: 1.8
DataStax OSS java-driver-mapper-processor: 4.5.1
DataStax OSS java-driver-mapper-runtime: 4.5.1
Cassandra: 3.11.4
Spring Boot Framework: 2.2.4.RELEASE
From what I understand, you have multiple problems: if you're using Spring Data Cassandra, you get an older driver (3.7.2 for Spring 2.2.6-RELEASE), and it may clash with the driver 4.0.0 you're trying to use (which is too old anyway - don't use it). Driver 4.x isn't binary compatible with previous drivers, and its support in Spring Data Cassandra may come only in the next major release of Spring.
Instead of Spring Data, you can use the Object Mapper from the Java driver itself - it could be more optimized than the Spring version.
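Regarding the compilation error with ItemPriceByMaterialMapperBuilder: that class is generated at compile time by java-driver-mapper-processor, so the processor has to be visible to the compiler as an annotation processor (for example via the compiler plugin's annotation processor path). A minimal sketch of the driver 4.x mapper is shown below; the Product names are illustrative, not taken from the project above:

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.mapper.annotations.ClusteringColumn;
import com.datastax.oss.driver.api.mapper.annotations.Dao;
import com.datastax.oss.driver.api.mapper.annotations.DaoFactory;
import com.datastax.oss.driver.api.mapper.annotations.Entity;
import com.datastax.oss.driver.api.mapper.annotations.Mapper;
import com.datastax.oss.driver.api.mapper.annotations.PartitionKey;
import com.datastax.oss.driver.api.mapper.annotations.Select;

@Entity
class Product {
    @PartitionKey private long tenantId;
    @ClusteringColumn private String itemId;
    // the mapper also needs a no-arg constructor plus getters and setters (omitted here)
}

@Dao
interface ProductDao {
    @Select
    Product findByTenantAndItem(long tenantId, String itemId);
}

@Mapper
interface ProductMapper {
    @DaoFactory
    ProductDao productDao();
}

// Usage: ProductMapperBuilder is generated by the annotation processor at compile time.
// CqlSession session = CqlSession.builder().build();
// ProductDao dao = new ProductMapperBuilder(session).build().productDao();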
I decided not to use the object mapper and to stay with Spring Data Cassandra on Spring 2.2.6-RELEASE. Thanks.
I am using hazelcast-3.11.2 and SubZero-0.9 as a global serializer. I am trying to configure Spring Session using this example. When I have more than one node in the cluster, I get the following exception when trying to get the session id:
2019-03-20 15:01:59.088 ERROR 13635 --- [ration.thread-3] c.h.m.i.operation.EntryBackupOperation : [x.x.x.x]:5701 [hazelcast-group] [3.11.2] null
java.lang.NullPointerException: null
at com.hazelcast.map.AbstractEntryProcessor$EntryBackupProcessorImpl.processBackup(AbstractEntryProcessor.java:83)
at com.hazelcast.map.impl.operation.EntryOperator.process(EntryOperator.java:314)
at com.hazelcast.map.impl.operation.EntryOperator.operateOnKeyValueInternal(EntryOperator.java:181)
at com.hazelcast.map.impl.operation.EntryOperator.operateOnKey(EntryOperator.java:166)
at com.hazelcast.map.impl.operation.EntryBackupOperation.run(EntryBackupOperation.java:60)
at com.hazelcast.spi.impl.operationservice.impl.operations.Backup.run(Backup.java:158)
at com.hazelcast.spi.Operation.call(Operation.java:170)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:208)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:197)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:413)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:153)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:123)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:110)
My instance config looks like this:
@Configuration
@EnableHazelcastHttpSession
public class HazelcastSessionConfig extends AbstractHttpSessionApplicationInitializer {

    @Bean
    public HazelcastInstance hazelcastInstance() {
        Config config = new Config();
        SubZero.useAsGlobalSerializer(config);
        MapAttributeConfig attributeConfig = new MapAttributeConfig()
                .setName(HazelcastSessionRepository.PRINCIPAL_NAME_ATTRIBUTE)
                .setExtractor(PrincipalNameExtractor.class.getName());
        config.getMapConfig(HazelcastSessionRepository.DEFAULT_SESSION_MAP_NAME)
                .addMapAttributeConfig(attributeConfig)
                .addMapIndexConfig(new MapIndexConfig(
                        HazelcastSessionRepository.PRINCIPAL_NAME_ATTRIBUTE, false));
        return Hazelcast.newHazelcastInstance(config);
    }
}
Removing SubZero from the configuration removes the exception, so it looks like a SubZero issue. I also use this instance as my cache provider and for the Hibernate second-level cache, so I cannot get rid of SubZero.
My thoughts were:
1. Have two different clusters: one for the cache, another for the sessions. This doesn't work for me, since I do not know how to configure Spring Session to use a specific Hazelcast instance (pass an instance name, the bean itself, etc.) - see the sketch after this list.
2. Specify which classes should be used with SubZero - but since I have plenty, and new classes are going to be added, this is not the best idea.
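For the first option, a rough sketch of what the wiring could look like, assuming your spring-session-hazelcast version provides the @SpringSessionHazelcastInstance qualifier to select the instance Spring Session uses (worth verifying for your version); both methods would live in the @Configuration class above, and the group/network settings needed to keep the two instances in separate clusters are omitted:

@Bean
@SpringSessionHazelcastInstance
public HazelcastInstance sessionHazelcastInstance() {
    // Dedicated instance for Spring Session; SubZero is NOT registered here.
    Config config = new Config();
    config.setInstanceName("sessions");
    MapAttributeConfig attributeConfig = new MapAttributeConfig()
            .setName(HazelcastSessionRepository.PRINCIPAL_NAME_ATTRIBUTE)
            .setExtractor(PrincipalNameExtractor.class.getName());
    config.getMapConfig(HazelcastSessionRepository.DEFAULT_SESSION_MAP_NAME)
            .addMapAttributeConfig(attributeConfig)
            .addMapIndexConfig(new MapIndexConfig(
                    HazelcastSessionRepository.PRINCIPAL_NAME_ATTRIBUTE, false));
    return Hazelcast.newHazelcastInstance(config);
}

@Bean
public HazelcastInstance cacheHazelcastInstance() {
    // Existing instance for caching / Hibernate second-level cache, with SubZero as before.
    Config config = new Config();
    config.setInstanceName("cache");
    SubZero.useAsGlobalSerializer(config);
    return Hazelcast.newHazelcastInstance(config);
}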
Will appreciate any help.
I am using Kryo serialization in Spark (v1.6.1) with Java, and while serializing a class which has a collection in one of its fields, it throws the following error:
Caused by: java.lang.UnsupportedOperationException
at java.util.Collections$UnmodifiableCollection.add(Collections.java:1055)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:102)
at com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:648)
at com.esotericsoftware.kryo.serializers.FieldSerializer$ObjectField.read(FieldSerializer.java:605)
... 27 more
I found out that this is because Kryo's default CollectionSerializer cannot deserialize the collection, because it's not modifiable, and we should use UnmodifiableCollectionsSerializer instead.
How do I tell Spark specifically to use UnmodifiableCollectionsSerializer with Kryo?
My current configuration is -
SparkConf conf = new SparkConf().setAppName("ABC");
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
conf.registerKryoClasses(new Class<?>[] {*list of classes I want to register*});
In case anybody else faces this issue, here is the solution - I got it working by using the javakaffee kryo-serializers library.
Add the following maven dependency:
<dependency>
<groupId>de.javakaffee</groupId>
<artifactId>kryo-serializers</artifactId>
<version>0.42</version>
</dependency>
Write a custom Kryo registrator to register UnmodifiableCollectionsSerializer:
public class CustomKryoRegistrator implements KryoRegistrator {
    @Override
    public void registerClasses(Kryo kryo) {
        UnmodifiableCollectionsSerializer.registerSerializers(kryo);
    }
}
Set spark.kryo.registrator to the custom registrator's fully-qualified name
conf.set("spark.kryo.registrator", "com.abc.CustomKryoRegistrator");
References -
https://github.com/magro/kryo-serializers
Spark Kryo: Register a custom serializer