Does Hazelcast honor a default cache configuration - hazelcast

In the hazelcast documentation there are a few brief references to a cache named "default" - for instance, here:
http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#jcache-declarative-configuration
Later, there is another mention of cache default configuration here:
http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#icache-configuration
What I would like is to be able to configure "default" settings that are inherited when caches are created. For instance, given the following configuration snippet:
<cache name="default">
<statistics-enabled>true</statistics-enabled>
<management-enabled>true</management-enabled>
<expiry-policy-factory>
<timed-expiry-policy-factory expiry-policy-type="ACCESSED" time-unit="MINUTES" duration-amount="2"/>
</expiry-policy-factory>
</cache>
I'd like for the following test to pass:
@Test
public void defaultCacheSettingsTest() throws Exception {
CacheManager cacheManager = underTest.get();
Cache cache = cacheManager.createCache("foo", new MutableConfiguration<>());
CompleteConfiguration cacheConfig = (CompleteConfiguration) cache.getConfiguration(CompleteConfiguration.class);
assertThat(cacheConfig.isManagementEnabled(), is(true));
assertThat(cacheConfig.isStatisticsEnabled(), is(true));
assertThat(cacheConfig.getExpiryPolicyFactory(),
is(AccessedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 2l)))
);
}
Ehcache has a "templating" mechanism and I am hoping that I can get a similar behavior.

Hazelcast supports configuration with wildcards. You can use <cache name="*"> for all Caches to share the same configuration, or apply other patterns to group Cache configurations as you wish.
Note that since you already use Hazelcast declarative configuration to configure your Caches, you should use CacheManager.getCache instead of createCache to obtain the Cache instance: Caches created with CacheManager.createCache(..., Configuration) disregard the declarative configuration since they are configured explicitly with the Configuration passed as argument.

Related

Not able to connect to Cassandra from my spring boot application, throwing exception as couldn't reach any contact-point [closed]

Added a config file to set the contact point programmatically:
@Bean(destroyMethod = "close")
public CqlSession session() {
    CqlSession session = CqlSession.builder()
            .addContactPoint(InetSocketAddress.createUnresolved("[240b:c0e0:1xx:xxx8:xxxx:x:x:x]", port))
            .withConfigLoader(
                    DriverConfigLoader.programmaticBuilder()
                            .withString(DefaultDriverOption.LOAD_BALANCING_LOCAL_DATACENTER, localDatacenter)
                            .withString(DefaultDriverOption.AUTH_PROVIDER_USER_NAME, username)
                            .withString(DefaultDriverOption.AUTH_PROVIDER_PASSWORD, password)
                            .withString(DefaultDriverOption.CONNECTION_INIT_QUERY_TIMEOUT, "10s")
                            .withString(DefaultDriverOption.CONNECTION_CONNECT_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.REQUEST_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.CONTROL_CONNECTION_TIMEOUT, "20s")
                            .withString(DefaultDriverOption.SESSION_KEYSPACE, keyspace)
                            .build())
            //.addContactPoint(InetSocketAddress.createUnresolved(InetAddress.getByName(contactPoints).getHostName(), port))
            .build();
    return session;
}
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.datastax.oss.driver.api.core.CqlSession]: Factory method 'cassandraSession' threw exception with message: Since you provided explicit contact points, the local DC must be explicitly set (see basic.load-balancing-policy.local-datacenter in the config, or set it programmatically with SessionBuilder.withLocalDatacenter). Current contact points are: Node(endPoint=/127.0.0.1:9042, hostId=0323221f-9a0f-ec92-ea4a-c1472c2a8b94, hashCode=39075b16)=datacenter1. Current DCs in this cluster are: datacenter1
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:171) ~[spring-beans-6.0.2.jar:6.0.2]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:648) ~[spring-beans-6.0.2.jar:6.0.2]
... 89 common frames omitted
Caused by: java.lang.IllegalStateException: Since you provided explicit contact points, the local DC must be explicitly set (see basic.load-balancing-policy.local-datacenter in the config, or set it programmatically with SessionBuilder.withLocalDatacenter). Current contact points are: Node(endPoint=/127.0.0.1:9042, hostId=0323221f-9a0f-ec92-ea4a-c1472c2a8b94, hashCode=39075b16)=datacenter1. Current DCs in this cluster are: datacenter1
at com.datastax.oss.driver.internal.core.loadbalancing.helper.MandatoryLocalDcHelper.discoverLocalDc(MandatoryLocalDcHelper.java:91) ~[java-driver-core-4.11.4-yb-1-RC1.jar:na]
at com.datastax.oss.driver.internal.core.loadbalancing.DefaultLoadBalancingPolicy.discoverLocalDc(DefaultLoadBalancingPolicy.java:119) ~[java-driver-core-4.11.4-yb-1-RC1.jar:na]
at
This is the application.yml file:
spring:
  data:
    cassandra:
      keyspace-name: xxx
      contact-points: [xxxx:xxxx:xxxx:xxx:xxx:xxx]
      port: xxx
      local-datacenter: xxxx
      use-dc-aware: true
      username: xxxxx
      password: xxxxx
      ssl: true
      SchemaAction: CREATE_IF_NOT_EXISTS
But the application still points to localhost, even though I've explicitly set the contact-points and the local DC.
Logs from the staging environment:
Caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx, hostId=null, hashCode=4e9ba6a8): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [s0|control|id: 0x984419ed, L:/[240b:c0e0:102:5dd7:xxxx:x:x:xxx]:4xxx - R:/[240b:c0e0:102:xxxx:xxxx:x:x:x]:3xxx] Protocol initialization request, step 1 (OPTIONS): unexpected failure (com.datastax.oss.driver.api.core.connection.ClosedConnectionException: Lost connection to remote peer)]
Thanks for the question!
I'll try to provide some pointers that might help you identify the problem, but it should be noted that your application appears to have some non-standard elements. Specifically, "java-driver-core-4.11.4-yb-1-RC1.jar" isn't a Java driver artifact released by DataStax (there isn't even a 4.11.4 Java driver release). This could be relevant for reasons we'll get into in a moment. I also don't recognize the configuration file you cite above. Could you provide more detail on how your app is configured? At first blush it looked as though you might be using spring-data-cassandra, but there's no mention of it in your stack trace... so perhaps you're using some kind of custom configuration code?
As to your specific question: my guess is that you have a Java driver configuration file in your staging environment which provides a default value for "datastax-java-driver.basic.contact-points". The 4.x Java driver is configured via the Lightbend Config library. Most relevant to our case, it searches for a set of configuration files with various default names on the classpath; these files are then merged together to generate the config passed to the driver. So if you have an application.conf in staging which specifies some contact points and is on the classpath, the code you cite above would run fine in your local environment but fail in staging.
To validate this, create an application.conf file in your local environment in src/main/resources (or somewhere else that's explicitly included in the classpath) and give it the following contents:
datastax-java-driver {
basic {
contact-points: ["127.0.0.1:9042"]
}
}
If you then re-run the app in your local environment you should see the error there as well.
Note that the core Java driver JAR already includes a reference.conf file which serves as a default configuration. Here's where the part about a custom JAR figures in: because you're not using a standard DataStax Java driver JAR, I don't know whether you're using the standard reference.conf file defined within that JAR. It's possible that the contact points are defined in that file, although if that were the case I'd expect you to already be seeing the error in any environment where you use that JAR.
One final note: the Java driver should be fine with IPv6 addresses. The issue described above isn't related to IPv6; it's entirely a function of how you're using the Java driver's configuration mechanism.
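If you want to see which contact points the driver actually resolves after all those files are merged, you can load and inspect the configuration yourself. A minimal sketch (the classpath resource base name "application" is an assumption; adjust it to match how your config is packaged):
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;
import com.datastax.oss.driver.api.core.config.DriverExecutionProfile;

public class ConfigCheck {
    public static void main(String[] args) {
        // Loads application.conf from the classpath, with the driver's reference.conf as fallback.
        DriverConfigLoader loader = DriverConfigLoader.fromClasspath("application");
        DriverExecutionProfile profile = loader.getInitialConfig().getDefaultProfile();
        if (profile.isDefined(DefaultDriverOption.CONTACT_POINTS)) {
            // Prints the contact points the merged configuration defines.
            System.out.println(profile.getStringList(DefaultDriverOption.CONTACT_POINTS));
        } else {
            System.out.println("No contact points in config; the driver would default to localhost");
        }
    }
}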
Hopefully some of the above is helpful!

What load balancing policies are available in Cassandra Java driver 4.x?

We are upgrading datastax Cassandra java driver from 3.2 to 4.x to support DSE 6.8.
Load balancing policies our application currently supports are RoundRobinPolicy and DCAwareRoundRobinPolicy.
These policies aren't available in java-driver-core 4.12.
How can we support the above policies? Please help.
Current code in our application using cassandra-driver-core-3.1.0.jar:
public static LoadBalancingPolicy getLoadBalancingPolicy(String loadBalanceStr, boolean isTokenAware) {
LoadBalancingPolicy loadBalance = null;
if (isTokenAware) {
loadBalance = new TokenAwarePolicy(loadBalanceDataConvert(loadBalanceStr));
} else {
loadBalance = loadBalanceDataConvert(loadBalanceStr);
}
return loadBalance;
}
private static LoadBalancingPolicy loadBalanceDataConvert(String loadBalanceStr) {
if (CassandraConstants.CASSANDRACONNECTION_LOADBALANCEPOLICY_DC.equals(loadBalanceStr)) {
return new DCAwareRoundRobinPolicy.Builder().build();
} else if (CassandraConstants.CASSANDRACONNECTION_LOADBALANCEPOLICY_ROUND.equals(loadBalanceStr)) {
return new RoundRobinPolicy();
}
return null;
}
Load balancing has been heavily simplified in version 4.x of the Cassandra Java driver. You no longer need to nest multiple policies within each other to achieve high availability.
In our opinion, the best policy is the DefaultLoadBalancingPolicy, which is enabled by default and combines the best attributes of the policies in older versions.
The DefaultLoadBalancingPolicy generates a query plan that is token-aware by default, so replicas which own the data appear first and are prioritised over other nodes in the local DC. For token-awareness to work, you must provide routing information either by keyspace (with getRoutingKeyspace()) or by routing key (with getRoutingKey()).
If routing information is not provided, the DefaultLoadBalancingPolicy generates a query plan that is a simple round-robin shuffle of available nodes in the local DC.
We understand that developers who are used to configuring DCAwareRoundRobinPolicy in older versions would like to continue using it but we do not recommend it. It is our opinion that failover should take place at the infrastructure layer, not the application layer.
Our opinion is that the DefaultLoadBalancingPolicy is the right choice in all cases. If you prefer to configure DC-failover, make sure you fully understand the implications and know that we think it is the wrong choice.
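For reference, a minimal sketch of what the 3.x helper above typically reduces to in 4.x (the contact point and datacenter name below are illustrative, not from the question). The policy itself is no longer instantiated in code; you only supply the local datacenter, and the driver uses DefaultLoadBalancingPolicy unless you override it in configuration:
import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;

public class SessionFactory {
    // In driver 4.x the load balancing policy is configured, not constructed, so there is
    // no direct replacement for getLoadBalancingPolicy(); building the session is enough.
    public static CqlSession createSession() {
        return CqlSession.builder()
                .addContactPoint(new InetSocketAddress("10.0.0.1", 9042)) // hypothetical contact point
                .withLocalDatacenter("dc1")                               // required when contact points are explicit
                .build();
    }
}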
For details, see the following documents:
Java driver v4 Upgrade Guide
Load Balancing in Java driver v4

How to configure multiple cassandra contact-points for lagom?

In Lagom it appears the contact points get loaded from the service locator, which accepts only a single URI. How can we specify multiple Cassandra contact points?
lagom.services {
cas_native = "tcp://10.0.0.120:9042"
}
I have tried setting just the contact points in the akka persistence config but that doesn't seem to override the service locator config.
All that I was missing was the session provider to override service lookup:
session-provider = akka.persistence.cassandra.ConfigSessionProvider
contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
These two lines were needed in the Lagom Cassandra config.
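Putting it together, a sketch of what that might look like in application.conf (the enclosing cassandra-journal section name is an assumption about an akka-persistence-cassandra setup; the two settings are the ones quoted above):
cassandra-journal {
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
  contact-points = ["10.0.0.120", "10.0.3.114", "10.0.4.168"]
}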

Spring Integration Metadatastore

I have added a bean called metadataStore to my Spring Boot + Spring Integration application and expected that the FTP synchronization state would be persisted and remain intact even after a server restart.
Nevertheless, my early tests suggest otherwise; if I start the server and let it pick up and process 3 test files and then restart the server, the same 3 files will be picked up and processed again, as if no persistent metadataStore was defined at all.
I wonder if I am missing some configuration details when setting up the datastore...
@Configuration
public class MetadataStoreConfiguration {
@Bean
public PropertiesPersistingMetadataStore metadataStore() {
PropertiesPersistingMetadataStore metadataStore = new PropertiesPersistingMetadataStore();
return metadataStore;
}
}
Also, I see in the spring-integration reference manual a short example on how to setup an idempotent receiver and metadata store. Is this what my implementation is lacking?
If that's it and I have to set this up like in the example, where would I define my metadataStore.get and metadataStore.put calls? The outbound adapter I am using doesn't provide me with an expression attribute... Here is my naive and incomplete attempt at this:
<int-file:inbound-channel-adapter id="inboundLogFile"
auto-create-directory="true"
directory="${sftp.local.dir}"
channel="fileInboundChannel"
prevent-duplicates="true">
<int:poller fixed-rate="${setup.inbound.poller.rate}"/>
</int-file:inbound-channel-adapter>
<int:filter id="fileFilter" input-channel="fileInboundChannel"
output-channel="idempotentServiceChannel"
discard-channel="fileDiscardChannel"
expression="#metadataStore.get(payload.name) == null"/>
This is the outbound adapter used in the example:
<int:outbound-channel-adapter channel="idempotentServiceChannel" expression="#metadataStore.put(payload.name, '')"/>
In my ftp outbound adapter I can't insert the above expression :(
<int-ftp:outbound-channel-adapter id="sdkOutboundFtp"
channel="idempotentServiceChannel"
session-factory="ftpsCachingSessionFactory"
charset="UTF-8"
auto-create-directory="true"
use-temporary-file-name="false"
remote-file-separator="/"
remote-directory-expression="${egnyte.remote.dir}"
* NO EXPRESSION POSSIBLE HERE *
mode="REPLACE">
</int-ftp:outbound-channel-adapter>
By default, the PropertiesPersistingMetadataStore only persists the state on a normal application shutdown; it's kept in memory until then.
In 4.1.2, we changed it to implement Flushable so users can flush the state at any time.
Consider using a FileSystemPersistentAcceptOnceFileListFilter in the local-filter on the inbound adapter instead of a separate filter element. See the documentation for more information.
Starting with version 4.1.5 this filter has an option flushOnUpdate to flush() the metadata store on every update.
Other metadata stores that use an external server (Redis, Mongo, Gemfire) don't need to be flushed.
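For example, a minimal sketch of that suggestion in Java config (the "sftp-" key prefix and bean names are illustrative; flushOnUpdate requires 4.1.5+ as noted above):
@Bean
public PropertiesPersistingMetadataStore metadataStore() {
    return new PropertiesPersistingMetadataStore();
}

@Bean
public FileSystemPersistentAcceptOnceFileListFilter acceptOnceFilter() {
    // Records "seen" file names in the metadata store instead of the in-memory prevent-duplicates check.
    FileSystemPersistentAcceptOnceFileListFilter filter =
            new FileSystemPersistentAcceptOnceFileListFilter(metadataStore(), "sftp-");
    filter.setFlushOnUpdate(true); // write the backing properties file on every update
    return filter;
}
The filter bean can then be referenced from the inbound adapter (for example filter="acceptOnceFilter" on the int-file adapter, or local-filter on an (S)FTP inbound adapter), replacing the separate <int:filter> element.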

How do I programmatically disable Hazelcast client's logging?

I used to use networkConfig.setProperty("hazelcast.logging.type", "none") in cluster's network configuration, but I can't see any logging-related configuration methods in any of ClientConfig and ClientNetworkConfig. Please help to save my server's log files.
Hazelcast version: 3.2.5
I think you can do exactly the same thing on the ClientConfig using ClientConfig.setProperty. Personally I never use it; I prefer the command-line option (-Dhazelcast.logging.type=blabla) because this gives more predictable logging behavior.
I use Hazelcast with the default JDK logger.
This works for me to save my server's log files:
Logger logger = Logger.getLogger("");
logger.setLevel(Level.WARNING);
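And a minimal sketch of the ClientConfig approach mentioned above (assuming the Hazelcast 3.x client API):
ClientConfig clientConfig = new ClientConfig();
clientConfig.setProperty("hazelcast.logging.type", "none"); // same property name as on the member side
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);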
