GridGain with Spring Boot

I've built a Docker image of GridGain Pro and I'm running it.
In Java I do the following.
Create the following @Configuration class:
@Configuration
@EnableCaching
public class CustomConfiguration extends CachingConfigurerSupport {

    @Bean
    @Override
    public KeyGenerator keyGenerator() {
        return (target, method, params) -> {
            StringBuilder sb = new StringBuilder();
            sb.append(target.getClass().getName());
            sb.append(method.getName());
            for (Object obj : params) {
                sb.append("|");
                sb.append(obj.toString());
            }
            return sb.toString();
        };
    }

    @Bean("cacheManager")
    public SpringCacheManager cacheManager(IgniteConfiguration igniteConfiguration) {
        try {
            SpringCacheManager springCacheManager = new SpringCacheManager();
            springCacheManager.setIgniteInstanceName("ignite");
            springCacheManager.setConfiguration(igniteConfiguration);
            springCacheManager.setDynamicCacheConfiguration(new CacheConfiguration<>().setCacheMode(CacheMode.REPLICATED));
            return springCacheManager;
        }
        catch (Exception ex) {
        }
        return null;
    }

    @Bean
    @Profile("!dev")
    IgniteConfiguration igniteConfiguration() {
        GridGainConfiguration gridGainConfiguration = new GridGainConfiguration();
        gridGainConfiguration.setRollingUpdatesEnabled(true);

        IgniteConfiguration igniteConfiguration = new IgniteConfiguration()
                .setPluginConfigurations(gridGainConfiguration)
                .setClientMode(true)
                .setPeerClassLoadingEnabled(false)
                .setIgniteInstanceName("MyIgnite");

        DataStorageConfiguration dataStorageConfiguration = new DataStorageConfiguration();
        DataRegionConfiguration dataRegionConfiguration = new DataRegionConfiguration();
        dataRegionConfiguration.setInitialSize(20 * 1024 * 1024);
        dataRegionConfiguration.setMaxSize(40 * 1024 * 1024);
        dataRegionConfiguration.setMetricsEnabled(true);
        dataStorageConfiguration.setDefaultDataRegionConfiguration(dataRegionConfiguration);
        igniteConfiguration.setDataStorageConfiguration(dataStorageConfiguration);

        TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
        TcpDiscoveryVmIpFinder tcpDiscoveryVmIpFinder = new TcpDiscoveryVmIpFinder();
        tcpDiscoveryVmIpFinder.setAddresses(Arrays.asList("192.168.99.100:47500..47502"));
        tcpDiscoverySpi.setIpFinder(tcpDiscoveryVmIpFinder);
        igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi);
        return igniteConfiguration;
    }
}
I start Spring and get the following error.
2018-04-18 12:27:29.277 WARN 12588 --- [ main] .GridEntDiscoveryNodeValidationProcessor : GridGain node cannot be in one cluster with Ignite node [locNodeAddrs=[server/0:0:0:0:0:0:0:1, server/10.29.96.164, server/127.0.0.1, /192.168.56.1, /192.168.99.1], rmtNodeAddrs=[172.17.0.1/0:0:0:0:0:0:0:1%lo, 192.168.99.100/10.0.2.15, 10.0.2.15/127.0.0.1, /172.17.0.1, /192.168.99.100]]
2018-04-18 12:27:29.283 ERROR 12588 --- [ main] o.a.i.internal.IgniteKernal%MyIgnite : Got exception while starting (will rollback startup routine).
I'm trying to use GridGain as a replacement for Redis and use the @Cacheable annotation.
Does anyone have a working gridgain example?
What is causing the error above?
G.

1) OK, it seems the issue was not providing H2 as a dependency.
2) Using GridGain Professional instead of GridGain Enterprise.
G.

GridGain node cannot be in one cluster with Ignite node is pretty self-explanatory.
Either you forgot to stop some local Apache Ignite node from earlier experiments,
or you deliberately tried to make GridGain join an Ignite cluster,
or, better yet, there is an instance of Apache Ignite running somewhere in your local network and you have configured multicast or some other overly broad discovery, so the two can see each other.
Maybe gridgain-core.x.x.x.jar is missing from one of the nodes' classpaths. Check and add it if necessary.
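As for a working example of the caching part: once the cacheManager bean above starts cleanly, usage is ordinary Spring caching. A minimal sketch, assuming an illustrative service and cache name (neither is from the original post):

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class GreetingService {

    // The result is stored in the Ignite-backed cache named "greetings";
    // repeated calls with the same argument are served from the cache.
    @Cacheable("greetings")
    public String greet(String name) {
        // Stand-in for an expensive call whose result we want to cache.
        return "Hello, " + name + "!";
    }
}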

Related

Quarkus unable to load the cassandra custom retry policy class

I am working on a task to migrate Quarkus from 1.x to 2.x, and the Quarkus integration with embedded Cassandra fails in unit testing with this error:
Caused by: java.lang.IllegalArgumentException: Can't find class com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
(specified by advanced.retry-policy.class)
Custom retry policy:
public class CassandraCustomRetryPolicy implements RetryPolicy {

    public CassandraCustomRetryPolicy(DriverContext context, String profileName) {
    }

    // override methods
}
The Quarkus test looks like this:
@QuarkusTest
@QuarkusTestResource(CassandraTestResource.class)
class Test {}
The CassandraTestResource class starts the embedded Cassandra:
public class CassandraTestResource implements QuarkusTestResourceLifecycleManager {

    private Cassandra cassandra;

    @Override
    public Map<String, String> start() {
        cassandra = new CassandraBuilder().version("3.11.9")
                .addEnvironmentVariable("JAVA_HOME", getJavaHome())
                .addJvmOptions("-Xms512M -Xmx512m").build();
        cassandra.start();
        // ... (rest of the method and class omitted in the original post)
I have overridden the default Cassandra driver retry policy in application.conf inside the resources folder:
datastax-java-driver {
    basic.request {
        timeout = ****
        consistency = ***
        serial-consistency = ***
    }
    advanced.retry-policy {
        class = com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
    }
}
I have observed that my custom retry policy class is treated as a banned resource in QuarkusClassLoader.java:
String resourceName = sanitizeName(name).replace('.', '/') + ".class";
boolean parentFirst = parentFirst(resourceName, state);
if (state.bannedResources.contains(resourceName)) {
    throw new ClassNotFoundException(name);
}
I have captured the following logs -
java.lang.ClassNotFoundException: com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy
at io.quarkus.bootstrap.classloading.QuarkusClassLoader.loadClass(QuarkusClassLoader.java:438)
at io.quarkus.bootstrap.classloading.QuarkusClassLoader.loadClass(QuarkusClassLoader.java:414)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:315)
at com.datastax.oss.driver.internal.core.util.Reflection.loadClass(Reflection.java:57)
at com.datastax.oss.driver.internal.core.util.Reflection.resolveClass(Reflection.java:288)
at com.datastax.oss.driver.internal.core.util.Reflection.buildFromConfig(Reflection.java:235)
at com.datastax.oss.driver.internal.core.util.Reflection.buildFromConfigProfiles(Reflection.java:194)
at com.datastax.oss.driver.internal.core.context.DefaultDriverContext.buildRetryPolicies(DefaultDriverContext.java:359)
at com.datastax.oss.driver.internal.core.util.concurrent.LazyReference.get(LazyReference.java:55)
at com.datastax.oss.driver.internal.core.context.DefaultDriverContext.getRetryPolicies(DefaultDriverContext.java:761)
at com.datastax.oss.driver.internal.core.session.DefaultSession$SingleThreaded.init(DefaultSession.java:339)
at com.datastax.oss.driver.internal.core.session.DefaultSession$SingleThreaded.access$1100(DefaultSession.java:300)
at com.datastax.oss.driver.internal.core.session.DefaultSession.lambda$init$0(DefaultSession.java:146)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:106)
at io.netty.channel.DefaultEventLoop.run(DefaultEventLoop.java:54)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
I am using Quarkus version 2.7.2.Final with Cassandra driver version 4.14.0.
It's not a complete answer but I wanted to leave some notes here in case anybody else can get this over the finish line before I get back to it.
The underlying problem here is that in the Quarkus test case described above the Java driver code is loaded by the QuarkusClassLoader, which (a) is more restrictive about where it loads code from and (b) doesn't appear to immediately support calling its parent if necessary. So in this case executing the following in the test will fail with a ClassNotFoundException:
CqlSession.class.getClassLoader().loadClass(customRetryPolicyClassName)
while the following works without issue:
CqlSession.class.getClassLoader().getParent().loadClass(customRetryPolicyClassName)
The class loader used to load CqlSession is the QuarkusClassLoader instance, while its parent is a stock JVM class loader.
The Java driver uses Class.forName() to load the classes specified for this policy. But since the Quarkus class loader is used to load the driver code itself, that's the loader used for these reflection ops... and as mentioned above, that loader has some specific characteristics that make loading external code harder.
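To make that difference concrete, a small sketch (the class name is the poster's policy class; the surrounding code and the outcomes noted in the comments are assumptions about what happens inside the failing test, not something from the original post):

ClassLoader quarkusLoader = CqlSession.class.getClassLoader(); // QuarkusClassLoader
ClassLoader parentLoader = quarkusLoader.getParent();          // stock JVM class loader

// Presumably throws ClassNotFoundException inside the failing test:
quarkusLoader.loadClass("com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy");

// Presumably resolves the class without issue:
parentLoader.loadClass("com.mind.common.connectors.cassandra.CassandraCustomRetryPolicy");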
It worked after I initialized the CQL session like this:
CqlSession.builder()
    .addContactPoint(new InetSocketAddress(settings.getAddress(), settings.getPort()))
    .withLocalDatacenter("***")
    .withClassLoader(Thread.currentThread().getContextClassLoader())
    .build();

HikariCP too many connections with jOOQ, causing HikariCP pool initialisation to fail

I have a service built with Spring Boot (2.4.5) and integrated with jOOQ. When we put load on the service, we get "too many connections" errors from jOOQ.
I am using the DSLContext like this:
public class CourseRepository {

    @Autowired private DSLContext dslContext;
}
I haven't provided an explicit DSLContext bean.
The HikariCP configuration looks like this:
spring.datasource.url=[My RDS URL]
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jooq.sql-dialect=MySQL
spring.datasource.type=com.zaxxer.hikari.HikariDataSource
spring.datasource.hikari.pool-name=[DB Name]
spring.datasource.hikari.initial-size=5
spring.datasource.hikari.maximum-pool-size=40
spring.datasource.hikari.minimum-idle=2
spring.datasource.hikari.idle-timeout=10000
spring.datasource.timeout=10000
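(As an aside, the pool-related properties above map onto HikariCP's Java API roughly as sketched below. This is only for reference and is not from the original post; the JDBC URL and pool name are placeholders, and initial-size is not a standard HikariCP setting.)

@Bean
public HikariDataSource dataSource() {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl("jdbc:mysql://<rds-host>:3306/<db>"); // spring.datasource.url
    config.setDriverClassName("com.mysql.cj.jdbc.Driver");  // spring.datasource.driver-class-name
    config.setPoolName("<DB Name>");                        // spring.datasource.hikari.pool-name
    config.setMaximumPoolSize(40);                          // spring.datasource.hikari.maximum-pool-size
    config.setMinimumIdle(2);                               // spring.datasource.hikari.minimum-idle
    config.setIdleTimeout(10000);                           // spring.datasource.hikari.idle-timeout (ms)
    return new HikariDataSource(config);
}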
Error Stack
error.class:java.sql.SQLNonTransientConnectionException
error.message:Too many connections
error.stack:
    at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:110)
    at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
    at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:833)
    at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:453)
    at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246)
    at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198)
    at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
    at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358)
    at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
    at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477)
Another Error
error.class:org.springframework.dao.DataAccessResourceFailureException
error.message:jOOQ; SQL [Query]; Too many connections; nested exception is java.sql.SQLNonTransientConnectionException: Too many connections
After reading several links and posts, I found we have to provide the DSLContext bean:
@Primary
@Bean(name = "dslContext")
public DSLContext dslContext(@Qualifier("dataSource") DataSource dataSource) {
    this.dataSource = dataSource;
    log.info("Datasource {}", dataSource);
    org.jooq.Configuration config = new DefaultConfiguration();
    config.set(dataSource);
    config.set(SQLDialect.MYSQL);
    return DSL.using(config);
}
After adding this configuration, I am getting a different error.
error.class:org.jooq.exception.DataAccessException
error.message:Error getting connection from data source HikariDataSource (CorsairServiceDB)
error.stack:
    at org.jooq_3.13.2.MYSQL.debug(Unknown Source)
    at org.jooq.impl.DataSourceConnectionProvider.acquire(DataSourceConnectionProvider.java:86)
    at org.jooq.impl.DefaultExecuteContext.connection(DefaultExecuteContext.java:647)
    at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:334)
    at org.jooq.impl.AbstractResultQuery.fetch(AbstractResultQuery.java:354)
    at org.jooq.impl.AbstractResultQuery.fetchInto(AbstractResultQuery.java:1550)
    at org.jooq.impl.SelectImpl.fetchInto(SelectImpl.java:3746)
    at com.xyz.dao.CourseRepository.getCoursesByIds(CourseRepository.java:799)
    at com.xyz.dao.CourseRepository$$FastClassBySpringCGLIB$$b22a30cd.invoke(<generated>)
    at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)

Hazelcast Spring Session SubZero(Kryo) EntryBackupProcessorImpl NullPointerException issue

I am using hazelcast-3.11.2 and SubZero-0.9 as the global serializer. I am trying to configure Spring Session using this example. When I have more than one node in the cluster, I get the following exception when trying to get the session id:
2019-03-20 15:01:59.088 ERROR 13635 --- [ration.thread-3] c.h.m.i.operation.EntryBackupOperation : [x.x.x.x]:5701 [hazelcast-group] [3.11.2] null
java.lang.NullPointerException: null
    at com.hazelcast.map.AbstractEntryProcessor$EntryBackupProcessorImpl.processBackup(AbstractEntryProcessor.java:83)
    at com.hazelcast.map.impl.operation.EntryOperator.process(EntryOperator.java:314)
    at com.hazelcast.map.impl.operation.EntryOperator.operateOnKeyValueInternal(EntryOperator.java:181)
    at com.hazelcast.map.impl.operation.EntryOperator.operateOnKey(EntryOperator.java:166)
    at com.hazelcast.map.impl.operation.EntryBackupOperation.run(EntryBackupOperation.java:60)
    at com.hazelcast.spi.impl.operationservice.impl.operations.Backup.run(Backup.java:158)
    at com.hazelcast.spi.Operation.call(Operation.java:170)
    at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:208)
    at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:197)
    at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:413)
    at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:153)
    at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:123)
    at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:110)
My instance config looks like this:
@Configuration
@EnableHazelcastHttpSession
public class HazelcastSessionConfig extends AbstractHttpSessionApplicationInitializer {

    @Bean
    public HazelcastInstance hazelcastInstance() {
        Config config = new Config();
        SubZero.useAsGlobalSerializer(config);
        MapAttributeConfig attributeConfig = new MapAttributeConfig()
                .setName(HazelcastSessionRepository.PRINCIPAL_NAME_ATTRIBUTE)
                .setExtractor(PrincipalNameExtractor.class.getName());
        config.getMapConfig(HazelcastSessionRepository.DEFAULT_SESSION_MAP_NAME)
                .addMapAttributeConfig(attributeConfig)
                .addMapIndexConfig(new MapIndexConfig(
                        HazelcastSessionRepository.PRINCIPAL_NAME_ATTRIBUTE, false));
        return Hazelcast.newHazelcastInstance(config);
    }
}
Removing SubZero from the configuration removes the exception, so it looks like a SubZero issue. I also use this instance as my cache provider and for the Hibernate second-level cache, so I cannot get rid of SubZero.
My thoughts were:
1) Having two different clusters: one for cache, another for sessions. This doesn't work for me, since I do not know how to configure Spring Session to use a specific Hazelcast instance (pass an instance name, the bean itself, etc.).
2) Specifying which classes should be used with SubZero - but since I have plenty, and new classes are going to be added, this is not the best idea.
Will appreciate any help.

Read & write data into cassandra using apache flink Java API

I intend to use Apache Flink to read data from and write data to Cassandra. I was hoping to use flink-connector-cassandra, but I can't find good documentation/examples for the connector.
Can you please point me to the right way to read and write data from Cassandra using Apache Flink? I only see sink examples, which are purely for writes. Is Apache Flink meant for reading data from Cassandra too, similar to Apache Spark?
I had the same question, and this is what I was looking for. I don't know if it is oversimplified for what you need, but I figured I should show it nonetheless.
ClusterBuilder cb = new ClusterBuilder() {
    @Override
    public Cluster buildCluster(Cluster.Builder builder) {
        return builder.addContactPoint("urlToUse.com").withPort(9042).build();
    }
};

CassandraInputFormat<Tuple2<String, String>> cassandraInputFormat = new CassandraInputFormat<>("SELECT * FROM example.cassandraconnectorexample", cb);
cassandraInputFormat.configure(null);
cassandraInputFormat.open(null);

Tuple2<String, String> testOutputTuple = new Tuple2<>();
cassandraInputFormat.nextRecord(testOutputTuple);

System.out.println("column1: " + testOutputTuple.f0);
System.out.println("column2: " + testOutputTuple.f1);
The way I figured this out was by finding the code for the CassandraInputFormat class and seeing how it worked (http://www.javatips.net/api/flink-master/flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/batch/connectors/cassandra/CassandraInputFormat.java). Based on the name, I honestly expected it to just be a format and not the full logic for reading from Cassandra, and I have a feeling others might think the same thing.
ClusterBuilder cb = new ClusterBuilder() {
    @Override
    public Cluster buildCluster(Cluster.Builder builder) {
        return builder.addContactPoint("localhost").withPort(9042).build();
    }
};

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

InputFormat inputFormat = new CassandraInputFormat<Tuple3<Integer, Integer, Integer>>("SELECT * FROM test.example;", cb); //, TypeInformation.of(Tuple3.class));
DataStreamSource t = env.createInput(inputFormat, TupleTypeInfo.of(new TypeHint<Tuple3<Integer, Integer, Integer>>() {}));
tableEnv.registerDataStream("t1", t);
Table t2 = tableEnv.sql("select * from t1");
t2.printSchema();
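For the write side that the question also asks about, here is a hedged sketch using the connector's streaming sink; the keyspace, table, and sample data are illustrative, and cb is the same ClusterBuilder as in the snippets above:

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Tuple2<String, String>> resultStream =
        env.fromElements(Tuple2.of("key1", "value1"), Tuple2.of("key2", "value2"));

CassandraSink.addSink(resultStream)
        .setQuery("INSERT INTO example.cassandraconnectorexample (column1, column2) VALUES (?, ?);")
        .setClusterBuilder(cb)
        .build();

env.execute("cassandra-sink-sketch");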
You can extend RichFlatMapFunction:
class MongoMapper extends RichFlatMapFunction[JsonNode, JsonNode] {

  var userCollection: MongoCollection[Document] = _

  override def open(parameters: Configuration): Unit = {
    // do something here like opening the connection
    val client: MongoClient = MongoClient("mongodb://localhost:10000")
    userCollection = client.getDatabase("gp_stage").getCollection("users").withReadPreference(ReadPreference.secondaryPreferred())
    super.open(parameters)
  }

  override def flatMap(event: JsonNode, out: Collector[JsonNode]): Unit = {
    // Do something here per record; this function can make use of objects initialized via open
    userCollection.find(Filters.eq("_id", somevalue)).limit(1).first().subscribe(
      (result: Document) => {
        // println(result)
      },
      (t: Throwable) => {
        println(t)
      },
      () => {
        out.collect(event)
      }
    )
  }
}
Basically, the open function executes once per worker and flatMap executes once per record. The example is for Mongo, but it can be used similarly for Cassandra.
In your case, as I understand it, the first step of your pipeline is reading data from Cassandra, so rather than writing a RichFlatMapFunction you should write your own RichSourceFunction.
As a reference you can have a look at the simple implementation of WikipediaEditsSource.
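A minimal hedged sketch of such a source (the contact point, query, and column names are illustrative assumptions, and the DataStax driver types are the 3.x ones used by the connector):

public class CassandraRowSource extends RichSourceFunction<Tuple2<String, String>> {

    private transient Cluster cluster;
    private transient Session session;
    private volatile boolean running = true;

    @Override
    public void open(Configuration parameters) {
        // Executed once per parallel worker: open the connection here.
        cluster = Cluster.builder().addContactPoint("localhost").withPort(9042).build();
        session = cluster.connect();
    }

    @Override
    public void run(SourceContext<Tuple2<String, String>> ctx) {
        // Emit one record per row of the query result.
        for (Row row : session.execute("SELECT column1, column2 FROM example.cassandraconnectorexample")) {
            if (!running) {
                break;
            }
            ctx.collect(Tuple2.of(row.getString("column1"), row.getString("column2")));
        }
    }

    @Override
    public void cancel() {
        running = false;
    }

    @Override
    public void close() {
        if (session != null) session.close();
        if (cluster != null) cluster.close();
    }
}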

primaryValues does not behave as expected

In our PoC, we have a cache in PARTITIONED mode with 2 backups, and we started 3 nodes. 100 entries were loaded into the cache, and we did the steps below to retrieve them.
public void perform() throws GridException {
    final GridCache<Long, Entity> cache = g.cache("cache");
    GridProjection proj = g.forCache("cache");
    Collection<Collection<Entity>> list = proj.compute().broadcast(
        new GridCallable<Collection<Entity>>() {
            @Override public Collection<Entity> call() throws Exception {
                Collection<Entity> values = cache.primaryValues();
                System.out.println("List size on each node: " + values.size());
                // console from each node shows 28, 38, 34 respectively, which is correct
                return values;
            }
        }).get();
    for (Collection<Entity> e : list) {
        System.out.println("list size when it arrives on the main node: " + e.size());
        // console shows 28 three times, which is not correct
    }
}
I assume that primaryValues() is meant to take the value of each element returned by primaryEntrySet() and put it into a Collection. I also tried primaryEntrySet() and it works without this problem.
The way GridGain serializes cache collections is by reference, which may not be very intuitive. I have filed a Jira issue with the Apache Ignite project (which is the next version of the GridGain open source edition): https://issues.apache.org/jira/browse/IGNITE-38
In the meantime, please try the following from your GridCallable, which should work:
return new ArrayList(cache.primaryValues());
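Applied to the GridCallable from the question, that would look roughly like this (a sketch based on the question's own code, not from the original answer):

new GridCallable<Collection<Entity>>() {
    @Override public Collection<Entity> call() throws Exception {
        // Copy the primary values into a plain ArrayList before returning,
        // so the collection is serialized by value rather than by reference.
        return new ArrayList<>(cache.primaryValues());
    }
}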
