Failed to execute Cassandra CQL statement while reading from Ignite Cache - apache-spark

I am trying to integrate Ignite with Cassandra. I set up the configuration and started the Ignite node, but I cannot insert or read data from the Ignite cache / Cassandra DB. I created a keyspace and a table in Cassandra and inserted some values, but when I tried to read the values, the exception below was raised. The same thing happened when I tried to insert some values.
My versions are: Ignite 2.6 | cqlsh 5.0.1 | Cassandra 3.11.4 | CQL spec 3.4.4 | Spark 2.3.0 | Scala 2.11.8 | cassandra-driver-core 3.0.0 | guava 19.0
This is the Ignite configuration.
config3.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<!-- Cassandra connection settings -->
<import resource="file:/etc/ignite/config/cassandra_config/connection-settings3.xml"/>
<!-- Persistence settings for 'cache1' -->
<bean id="cache1_persistence_settings" class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
<constructor-arg type="org.springframework.core.io.Resource" value="file:/etc/ignite/config/cassandra_config/persistence-settings-3.xml" />
</bean>
<!-- Ignite configuration -->
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Enabling ODBC. -->
<property name="odbcConfiguration">
<bean class="org.apache.ignite.configuration.OdbcConfiguration"/>
</property>
<property name="cacheConfiguration">
<list>
<!-- Configuring persistence for "cache1" cache -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="cache1"/>
<property name="readThrough" value="true"/>
<property name="writeThrough" value="true"/>
<property name="cacheStoreFactory">
<bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
<property name="dataSourceBean" value="cassandraRegularDataSource"/>
<property name="persistenceSettingsBean" value="cache1_persistence_settings"/>
</bean>
</property>
<!-- Query fields configuration -->
<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity">
<property name="keyType" value="java.lang.String"/>
<property name="valueType" value="java.lang.Integer"/>
</bean>
</list>
</property>
<!-- Query fields configuration END -->
</bean>
</list>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead of static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>127.0.0.1:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
connection-settings3.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="loadBalancingPolicy" class="com.datastax.driver.core.policies.TokenAwarePolicy">
<constructor-arg type="com.datastax.driver.core.policies.LoadBalancingPolicy">
<bean class="com.datastax.driver.core.policies.RoundRobinPolicy"/>
</constructor-arg>
</bean>
<bean id="cassandraRegularDataSource" class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
<property name="readConsistency" value="QUORUM"/>
<property name="writeConsistency" value="QUORUM"/>
<property name="loadBalancingPolicy" ref="loadBalancingPolicy"/>
<property name="contactPoints">
<list>
<value>127.0.0.1</value>
</list>
</property>
</bean>
</beans>
persistence-settings-3.xml
<persistence keyspace="ignite" table="cache_test" ttl="86400">
<keyspaceOptions>
REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 3}
AND DURABLE_WRITES = true
</keyspaceOptions>
<tableOptions>
comment = 'Cache test'
AND read_repair_chance = 0.2
</tableOptions>
<!--<keyPersistence class="java.lang.String" strategy="PRIMITIVE" column="key" />
<valuePersistence class="java.lang.Integer" strategy="PRIMITIVE" column="value" />-->
<keyPersistence class="java.lang.Integer" strategy="PRIMITIVE" column="key"/>
<valuePersistence class="java.lang.String" strategy="PRIMITIVE" column="value" />
</persistence>
The error message I got is:
ERROR CassandraCacheStore:586 - Failed to execute Cassandra CQL statement: select "value" from "ignite"."cache_test" where "key"=?;
class org.apache.ignite.IgniteException: Failed to execute Cassandra CQL statement: select "value" from "ignite"."cache_test" where "key"=?;
at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:167)
at org.apache.ignite.cache.store.cassandra.CassandraCacheStore.load(CassandraCacheStore.java:189)
at org.apache.ignite.internal.processors.cache.CacheStoreBalancingWrapper.load(CacheStoreBalancingWrapper.java:98)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadFromStore(GridCacheStoreManagerAdapter.java:327)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.load(GridCacheStoreManagerAdapter.java:293)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAllFromStore(GridCacheStoreManagerAdapter.java:434)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadAll(GridCacheStoreManagerAdapter.java:400)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$18.call(GridCacheAdapter.java:2046)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$18.call(GridCacheAdapter.java:2044)
at org.apache.ignite.internal.processors.cache.GridCacheContext$3.call(GridCacheContext.java:1435)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6695)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteException: Failed to prepare Cassandra CQL statement: select "value" from "ignite"."cache_test" where "key"=?;
at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.prepareStatement(CassandraSessionImpl.java:621)
at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:137)
... 15 more
Caused by: class org.apache.ignite.IgniteException: Failed to establish session with Cassandra database
at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.session(CassandraSessionImpl.java:555)
at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.prepareStatement(CassandraSessionImpl.java:603)
... 16 more
Caused by: java.lang.NoSuchMethodError: com.google.common.util.concurrent.Futures.withFallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureFallback;Ljava/util/concurrent/Executor;)Lcom/google/common/util/concurrent/ListenableFuture;
To start spark-shell I am using the following command:
bin/spark-shell --conf "spark.executor.extraClassPath=/etc/ignite/libs/*:/etc/ignite/libs/optional/ignite-spark/*:/etc/ignite/libs/optional/ignite-log4j/*:/etc/ignite/libs/optional/ignite-yarn/*:/etc/ignite/libs/ignite-spring/*:/etc/ignite/libs/ignite-cassandra-store/*"
--conf "spark.driver.extraClassPath=/etc/ignite/libs/*:/etc/ignite/libs/optional/ignite-spark/*:/etc/ignite/libs/optional/ignite-log4j/*:/etc/ignite/libs/optional/ignite-yarn/*:/etc/ignite/libs/ignite-spring/*:/etc/ignite/libs/ignite-cassandra-store/*"
--packages org.apache.ignite:ignite-spark:2.3.0
--repositories http://repo.maven.apache.org/maven2/org/apache/ignite

Caused by: java.lang.NoSuchMethodError: com.google.common.util.concurrent.Futures.withFallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureFallback;Ljava/util/concurrent/Executor;)Lcom/google/common/util/concurrent/ListenableFuture;
This looks like JAR hell, i.e., your Cassandra driver version is not compatible with the Guava (Google common util) version on the classpath. What build/dependency system are you using?

I solved this issue by changing the Guava version: I added guava 19.0 to the $IGNITE_HOME/libs/ folder.
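For anyone hitting the same NoSuchMethodError, it can help to check which Guava jar actually wins on the classpath before swapping jars around. A minimal diagnostic sketch (the class name is arbitrary; running it with the same classpath as the Ignite/Spark process is assumed):
import com.google.common.util.concurrent.Futures;

public class GuavaCheck {
    public static void main(String[] args) {
        // Prints the jar the Cassandra driver will actually load Futures from.
        // If this is not the guava-19.0 jar, Futures.withFallback(...) fails as above.
        System.out.println(Futures.class.getProtectionDomain().getCodeSource().getLocation());
    }
}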

Related

How can I fix the CrashLoopBackOff problem on AKS for Apache Ignite?

I am trying to deploy Apache Ignite on Azure Kubernetes Service; the AKS version is 1.19.
I followed the instructions below from the official Apache page:
Microsoft Azure Kubernetes Service Deployment
But when I check the status of my pods, they seem to have failed; the status of the pods is CrashLoopBackOff.
When I check the logs, it says the problem is in node-configuration.xml.
Here is the XML of the node configuration.
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<constructor-arg>
<bean class="org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration">
<property name="namespace" value="default" />
<property name="serviceName" value="ignite" />
</bean>
</constructor-arg>
</bean>
</property>
</bean>
</property>
</bean>
Also, here is the output of the log.
PS C:\Users\kaan.akyalcin\ignite> kubectl logs ignite-cluster-755f6665c8-djdcn -n ignite
class org.apache.ignite.IgniteException: Failed to instantiate Spring XML application context [springUrl=file:/ignite/config/node-configuration.xml, err=Line 1 in XML document from URL [file:/ignite/config/node-configuration.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 71; cvc-elt.1: Cannot find the declaration of element 'bean'.]
at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:1098)
at org.apache.ignite.Ignition.start(Ignition.java:356)
at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:367)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to instantiate Spring XML application context [springUrl=file:/ignite/config/node-configuration.xml, err=Line 1 in XML document from URL [file:/ignite/config/node-configuration.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 71; cvc-elt.1: Cannot find the declaration of element 'bean'.]
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:392)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:104)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98)
at org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:736)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:937)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:846)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:716)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:685)
at org.apache.ignite.Ignition.start(Ignition.java:353)
I couldn't find a solution. How can I fix this problem?
You have to pass a valid XML file; it looks like the docs need to be adjusted accordingly:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="degault"/>
<property name="serviceName" value="ignite"/>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
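If it helps, you can catch this kind of schema error before baking the file into the image by parsing the Spring XML locally. A minimal sketch (the file path is an assumption, and this only checks that the XML is a valid Spring context, not that Kubernetes discovery will succeed):
import org.springframework.context.support.FileSystemXmlApplicationContext;

public class ConfigCheck {
    public static void main(String[] args) {
        // Throws with the same "Cannot find the declaration of element ..." message
        // if the <beans> wrapper or the schema locations are missing.
        try (FileSystemXmlApplicationContext ctx =
                 new FileSystemXmlApplicationContext("file:/ignite/config/node-configuration.xml")) {
            System.out.println("Parsed OK, bean definitions: " + ctx.getBeanDefinitionCount());
        }
    }
}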

Ignite connection failing with Spark Structured streaming

I'm trying to integrate Ignite into my Spark Structured Streaming application.
Following are the dependencies I included for Ignite:
- ignite-core,
- ignite-spark,
- ignite-spring,
- cache-api
Spark config for Ignite:
private static final String CONFIG = "C:\\apache-ignite-2.7.6-bin\\config\\default-config.xml";
spark.read().format(IgniteDataFrameSettings.FORMAT_IGNITE())
.option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), CONFIG)
.option(IgniteDataFrameSettings.OPTION_TABLE(), "person")
.load();
I'm totally clueless about how to resolve the error below, as the config is the default config provided in the binary distribution. All the dependencies are included.
- Ignite 2.7.6
- Spark 2.4.0
ERROR StpMain: Error
class org.apache.ignite.IgniteException: Failed to instantiate Spring XML application context [springUrl=file:/C:/apache-ignite-2.7.6-bin/config/default-config.xml, err=Line 26 in XML document from URL [file:/C:/apache-ignite-2.7.6-bin/config/default-config.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 26; columnNumber: 72; cvc-elt.1: Cannot find the declaration of element 'beans'.]
default-config.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<!-- SharedRDD cache example configuration (Atomic mode). -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<!-- Set a cache name. -->
<property name="name" value="sharedRDD"/>
<!-- Set a cache mode. -->
<property name="cacheMode" value="PARTITIONED"/>
<!-- Index Integer pairs used in the example. -->
<property name="indexedTypes">
<list>
<value>java.lang.Integer</value>
<value>java.lang.Integer</value>
</list>
</property>
<!-- Set atomicity mode. -->
<property name="atomicityMode" value="ATOMIC"/>
<!-- Configure a number of backups. -->
<property name="backups" value="1"/>
</bean>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead of static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>localhost:10800</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
I should also mention that Spark 2.4 is not supported yet.
Here is the ticket in the Apache Ignite JIRA; you can track its status here: https://issues.apache.org/jira/browse/IGNITE-12054
Please take a look at this thread. It looks like the person there had the same issue:
Cannot find the declaration of element 'beans'
I guess that you should add the following to your XML:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-[MAJOR.MINOR].xsd"
Where [MAJOR.MINOR] is the version of your Spring, e.g. 3.1, 4.0, etc.

How to set expiry policy for ignite Cache [Inserting data using SPARK Dataframe]

I am trying to set an expiry policy using an XML config file:
<?xml version="1.0" encoding="UTF-8"?>
<!--
Ignite configuration with all defaults and enabled p2p deployment and enabled events.
-->
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/util
http://www.springframework.org/schema/util/spring-util.xsd">
<bean abstract="true" id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Set to true to enable distributed class loading for examples, default is false. -->
<property name="peerClassLoadingEnabled" value="true"/>
<!-- Enable task execution events for examples. -->
<property name="includeEventTypes">
<list>
<!--Task execution events-->
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_STARTED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_TIMEDOUT"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_SESSION_ATTR_SET"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_REDUCED"/>
<!--Cache events-->
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_READ"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED"/>
</list>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead of static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>127.0.0.1:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="setExpiryPolicyFactory" >
<bean class="javax.cache.expiry.CreatedExpiryPolicy">
<!-- Remove all data after 10 seconds -->
<property name="expiryDuration" value="10000"/>
</bean>
</property>
<property name="eagerTtl" value="true"/>
</bean>
</beans>
I am a bit new to Ignite. I walked through the documentation but was unable to find this. Is there any way to set an expiry policy in XML?
Any help would be appreciated!
That requires a bit of Spring XML magic because there is no easy-to-use factory class for the CreatedExpiryPolicy (i.e. no CreatedExpiryPolicyFactory that you can just instantiate like a normal bean), but there is the CreatedExpiryPolicy::factoryOf that can be passed as factory-method:
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="SECONDS"/>
<constructor-arg value="10"/>
</bean>
</constructor-arg>
</bean>
</property>
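For reference, the same policy can be set programmatically through the Java API; a minimal sketch (the cache name and the no-argument Ignition.start() are assumptions):
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class ExpiryExample {
    public static void main(String[] args) {
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
        // Entries expire 10 seconds after creation, matching the XML above.
        ccfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 10)));
        ccfg.setEagerTtl(true);

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(ccfg).put(1, "expires in 10s");
        }
    }
}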
Apache Ignite doesn't support expiry policies and TTL for SQL tables.
An alternative would be to create caches with dynamically generated names based on the desired time window.
For example, you could append the hour of the day if you need it hourly:
MY_CACHE_00
MY_CACHE_01
MY_CACHE_02
etc...
MY_CACHE_23
Then make your software write to and read from the latest hourly cache, and clear the oldest.
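A minimal sketch of that name-rotation idea (the cache names and the convention that the next hour's slot is the oldest one to clear are assumptions):
import java.time.LocalTime;

public class HourlyCacheNames {
    // Cache to write to / read from for the current hour, e.g. MY_CACHE_13.
    static String currentCacheName() {
        return String.format("MY_CACHE_%02d", LocalTime.now().getHour());
    }

    // With 24 hourly slots, the next hour's slot holds the oldest data and is
    // the one to clear (or destroy) before it gets reused.
    static String oldestCacheName() {
        return String.format("MY_CACHE_%02d", (LocalTime.now().getHour() + 1) % 24);
    }

    public static void main(String[] args) {
        System.out.println("write/read: " + currentCacheName() + ", clear: " + oldestCacheName());
    }
}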

Integrating Apache Cassandra with Apache Ignite

I'm trying to integrate Apache Ignite with Apache Cassandra (3.11.2), as I want to use Ignite to cache the data present in my already existing Cassandra database.
After going through the online resources, I've done the following so far:
Downloaded Apache Ignite.
Copied all the folders present in "libs/optional/" to "libs/" (I don't know which ones will be required for Cassandra).
Created 3 XMLs in the config folder, i.e. "cassandra-config.xml", "connection-settings.xml" and "persistance-settings.xml". Currently I'm using the same node (172.16.129.68) for both Cassandra and Ignite.
cassandra-config.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<!-- Cassandra connection settings -->
<import resource="connection-settings.xml" />
<!-- Persistence settings for 'cache1' -->
<bean id="cache1_persistence_settings" class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
<constructor-arg type="org.springframework.core.io.Resource" value="file:/home/cass/apache_ignite/apache-ignite-fabric-2.4.0-bin/config/persistance-settings.xml" />
</bean>
<!-- Ignite configuration -->
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<list>
<!-- Configuring persistence for "cache1" cache -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="cache1"/>
<property name="readThrough" value="true"/>
<property name="writeThrough" value="true"/>
<property name="writeBehindEnabled" value="true"/>
<property name="cacheStoreFactory">
<bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
<property name="dataSourceBean" value="cassandraAdminDataSource"/>
<property name="persistenceSettingsBean" value="cache1_persistence_settings"/>
</bean>
</property>
</bean>
</list>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead of static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>172.16.129.68:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
connection-settings.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="loadBalancingPolicy" class="com.datastax.driver.core.policies.TokenAwarePolicy">
<constructor-arg type="com.datastax.driver.core.policies.LoadBalancingPolicy">
<bean class="com.datastax.driver.core.policies.RoundRobinPolicy"/>
</constructor-arg>
</bean>
<bean id="cassandraAdminDataSource" class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
<property name="port" value="9042"/>
<property name="contactPoints" value="172.16.129.68"/>
<property name="readConsistency" value="ONE"/>
<property name="writeConsistency" value="ONE"/>
<property name="loadBalancingPolicy" ref="loadBalancingPolicy"/>
</bean>
</beans>
persistance-settings.xml
<persistence keyspace="test" table="epc_table">
<keyPersistence class="java.lang.String" strategy="PRIMITIVE" column="imsi"/>
<valuePersistence strategy="BLOB"/>
</persistence>
I run the following command to start Ignite from the bin folder:
ignite.sh ../config/cassandra-config.xml
Now I want to take a look at the Cassandra table via sqlline. I've tried the following:
./sqlline.sh -u jdbc:cassandra://172.16.129.68:9042/test //(test is the name of the keyspace)
I get the following output:
No known driver to handle "jdbc:cassandra://172.16.129.68:9042/test". Searching for known drivers...
java.lang.NullPointerException
sqlline version 1.3.0
0: jdbc:cassandra://172.16.129.68:9042/test>
I've also tried:
./sqlline.sh -u jdbc:ignite:thin://172.16.129.68
but when I use "!tables", I'm not able to see any table.
What exactly is missing? How do I access/modify the tables present in Cassandra using sqlline?
Operating System: RHEL 6.5
Apache Ignite is a key-value database, and there are no tables created by default that you can view with a JDBC connector. A CacheStore is a way to integrate Ignite with an external DB or any other storage, and it loads data as key-value pairs.
In your config you told Ignite that you want to store and load entries in/from Cassandra, but Ignite still doesn't know the structure of those entries (by the way, Ignite really doesn't care what objects are put into it).
To be able to list tables and run queries on them, you need to create tables. For that you need to have ignite-indexing in the libs/ directory and set a QueryEntity, or indexed types if you have annotated POJOs. Here is an example of such a configuration:
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="mycache"/>
<!-- Configure query entities -->
<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity">
<property name="keyType" value="java.lang.Long"/>
<property name="valueType" value="org.apache.ignite.examples.Person"/>
<property name="fields">
<map>
<entry key="id" value="java.lang.Long"/>
<entry key="orgId" value="java.lang.Long"/>
<entry key="firstName" value="java.lang.String"/>
<entry key="lastName" value="java.lang.String"/>
<entry key="resume" value="java.lang.String"/>
<entry key="salary" value="java.lang.Double"/>
</map>
</property>
<property name="indexes">
<list>
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="id"/>
</bean>
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="orgId"/>
</bean>
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="salary"/>
</bean>
</list>
</property>
</bean>
</list>
</property>
</bean>
If you configure that, you'll be able to list and query those tables over sqlline. (Please note that you cannot query data that has not been loaded into Ignite. To load it, you can use IgniteCache.get() with the readThrough option enabled, or IgniteCache.loadCache() to load everything from the Cassandra table.)
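As a rough illustration, the two loading options mentioned above look roughly like this from the Java API (the config path, cache name, and sample key are assumptions based on the question):
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class LoadFromCassandra {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("config/cassandra-config.xml")) {
            IgniteCache<String, Object> cache = ignite.cache("cache1");

            // Read-through: a miss in Ignite is fetched from Cassandra via the cache store.
            System.out.println(cache.get("123456789012345"));

            // Bulk load: pulls all rows from the Cassandra table into the Ignite cache.
            cache.loadCache(null);
        }
    }
}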
To query Cassandra itself with JDBC, you need a JDBC driver for it; try, for example, DBSchema.

Need help on starting the Ignite cache in a 2-node cluster

I am a novice with Ignite and am trying to use it to set up an in-memory cache. I did some basic configuration and got the Ignite-based pluggable persistence working on a single node. Now I am planning to test the performance on a 2-node cluster, and I am setting up the Ignite configuration as below:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="countryCacheStoreFactory" class="javax.cache.configuration.FactoryBuilder" factory-method="factoryOf">
<constructor-arg><value>com.xyz.exploreignite.cache.CustomCacheStore</value></constructor-arg>
</bean>
<bean id="stateCacheStoreFactory" class="javax.cache.configuration.FactoryBuilder" factory-method="factoryOf">
<constructor-arg><value>com.xyz.exploreignite.cache.CustomStateCacheStore</value></constructor-arg>
</bean>
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="peerClassLoadingEnabled" value="false"/>
<property name="clientMode" value="true"/>
<property name="gridName" value="clusterGrid"/>
<property name="cacheConfiguration">
<list>
<!-- Partitioned cache example configuration (Atomic mode). -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="atomicityMode" value="ATOMIC"/>
<property name="backups" value="1"/>
<property name="name" value="customCountryCache"/>
<property name="readThrough" value="true"/>
<property name="writeThrough" value="true"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="writeBehindEnabled" value="true"/>
<property name="copyOnRead" value="true"/>
<property name="memoryMode" value="OFFHEAP_TIERED"/>
<property name="atomicWriteOrderMode" value="PRIMARY"/>
<property name="indexedTypes" >
<list>
<value>java.lang.Integer</value>
<value>com.xyz.exploreignite.pojo.Country</value>
</list>
</property>
<!-- Cache store. -->
<property name="cacheStoreFactory" ref="countryCacheStoreFactory"/>
</bean>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="atomicityMode" value="ATOMIC"/>
<property name="backups" value="1"/>
<property name="name" value="customStateCache"/>
<property name="readThrough" value="true"/>
<property name="writeThrough" value="true"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="writeBehindEnabled" value="true"/>
<property name="copyOnRead" value="true"/>
<property name="memoryMode" value="OFFHEAP_TIERED"/>
<property name="atomicWriteOrderMode" value="PRIMARY"/>
<property name="indexedTypes" >
<list>
<value>java.lang.Integer</value>
<value>com.xyz.exploreignite.pojo.State</value>
</list>
</property>
<!-- Cache store. -->
<property name="cacheStoreFactory" ref="stateCacheStoreFactory"/>
</bean>
</list>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead of static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">-->
<property name="addresses">
<list>
<value>127.0.0.1:47500..47509</value>
<value>172.26.49.1:47500..47509</value>
<!-- In distributed environment, replace with actual host IP address. -->
<value>172.26.49.2:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
Now, when starting Ignite with "bin/ignite.sh " on both nodes, it shows "failed to connect to server". When running only "bin/ignite.sh" in parallel with the above, each individual Ignite instance starts in standalone mode with only 1 client. I need both of them to use the shared cluster. Please suggest possible issues in my deployment/execution.
Try to remove the <value>127.0.0.1:47500..47509</value> line from the discovery configuration. It doesn't make much sense for a distributed cluster.
That configuration has <property name="clientMode" value="true"/>. You need to have at least one node set to <property name="clientMode" value="false"/>, or else your clients won't have any server nodes to connect to.
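To illustrate the point, here is a minimal sketch (instance names are arbitrary, and Ignite 2.x APIs are assumed; on 1.x the analogous setter is setGridName) that starts one server node and one client node in the same JVM. In your XML setup the equivalent fix is simply flipping clientMode to false on at least one of the two nodes:
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientServerExample {
    public static void main(String[] args) {
        // Server node: clientMode is false by default.
        Ignite server = Ignition.start(new IgniteConfiguration().setIgniteInstanceName("server"));

        // Client node: it can only join if at least one server node is running.
        Ignite client = Ignition.start(new IgniteConfiguration()
            .setIgniteInstanceName("client")
            .setClientMode(true));

        System.out.println("Nodes in topology: " + client.cluster().nodes().size()); // expect 2

        client.close();
        server.close();
    }
}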
