Hazelcast ignored cluster configuration

I have defined a "static" Hazelcast configuration:
@Bean
public Config getHazelcastConfig() {
    final Config config = new Config();
    config.setProperty("hazelcast.logging.type", "slf4j");

    final GroupConfig groupConfig = new GroupConfig();
    groupConfig.setName("projectName");
    groupConfig.setPassword("projectPassword");
    config.setGroupConfig(groupConfig);

    final NetworkConfig networkConfig = new NetworkConfig();
    final TcpIpConfig tcpIpConfig = new TcpIpConfig();
    final String[] members = "10.0.0.2".split(",");
    for (String member : members) {
        tcpIpConfig.addMember(member);
    }
    tcpIpConfig.setConnectionTimeoutSeconds(5);

    final JoinConfig joinConfig = networkConfig.getJoin();
    joinConfig.getAwsConfig().setEnabled(false);
    joinConfig.getMulticastConfig().setEnabled(false);
    joinConfig.setTcpIpConfig(tcpIpConfig);
    joinConfig.getTcpIpConfig().setEnabled(true);
    joinConfig.getTcpIpConfig().setConnectionTimeoutSeconds(5);
    config.setNetworkConfig(networkConfig);

    config.setInstanceName("projectInstanceName");
    return config;
}
Where "10.0.0.2" is my localhost IP. I want only one Hazelcast instance in my tcpIpConfig member list. A friend sitting in the same network has the IP "10.0.0.3". He hasn't bothered to change the password and group name in the property file shared on git, and he connects to my cluster. Why is he able to connect to my cluster, and how can I prevent this?

Yes, @Sachin, you are right. After adding
securityCfg.setEnabled(true);
to the Hazelcast configuration, group names and passwords are checked.
The second problem I had with multiple Hazelcast instances on localhost was related to Hibernate: when Hazelcast is used as the Hibernate second-level cache, a Hazelcast member is created. It can be switched between embedded-member and client mode with:
properties.setProperty("hibernate.cache.hazelcast.use_native_client", "false");
or
properties.setProperty("hibernate.cache.hazelcast.use_native_client", "true");
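For context, a minimal sketch of where that flag sits among the Hibernate cache properties. Only the `use_native_client` flag comes from the answer above; the second-level cache switch and the region factory class are assumed surrounding settings:

```java
import java.util.Properties;

Properties properties = new Properties();
// Assumed surrounding settings for Hazelcast as the Hibernate L2 cache:
properties.setProperty("hibernate.cache.use_second_level_cache", "true");
properties.setProperty("hibernate.cache.region.factory_class",
        "com.hazelcast.hibernate.HazelcastCacheRegionFactory");
// "false" embeds a full Hazelcast member in the JVM; "true" connects as a
// lightweight client instead, avoiding the extra local member described above.
properties.setProperty("hibernate.cache.hazelcast.use_native_client", "true");
```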

Hazelcast seems not forming cluster on server

I am trying to use Hazelcast within my Java application (a Minecraft plugin). On my computer everything works fine, but on my server the members will not connect to each other: every member seems to form its own standalone cluster instead of joining together.
I don't know why, and I need help.
Here is my Hazelcast config:
Config config = new Config();
config.setClusterName("core");
NetworkConfig networkConfig = config.getNetworkConfig();
networkConfig.setPort(5900);
config.getSerializationConfig().addDataSerializableFactory(1, new UserSerializerFactory());
MapConfig mapConfig = new MapConfig();
mapConfig.setName("users");
mapConfig.setBackupCount(1);
mapConfig.setTimeToLiveSeconds(30 * 86400); // 30 days
config.addMapConfig(mapConfig);
hazelcastMember = Hazelcast.newHazelcastInstance(config);
userMap = hazelcastMember.getMap("users");
It works fine on my PC but not on my server!
The default configuration uses multicast for discovery, and this is frequently blocked in server environments.
Try something like this:
Config config = new Config();

TcpIpConfig tcpIpConfig = new TcpIpConfig()
        .setEnabled(true)
        .setMembers(List.of("127.0.0.1:5900"));

JoinConfig joinConfig = new JoinConfig()
        .setMulticastConfig(new MulticastConfig().setEnabled(false))
        .setTcpIpConfig(tcpIpConfig);

NetworkConfig networkConfig = new NetworkConfig().setJoin(joinConfig);
networkConfig.setPort(5900);
config.setNetworkConfig(networkConfig);
i.e. turn off multicast and use TCP with the list of specified addresses.

Hazelcast eviction is not working with USED_HEAP_SIZE with Hazelcast 5.0

I have the following map configuration for Hazelcast:
private MapConfig initializeDefaultMapConfig(int ttlMinutes, int size) {
    MapConfig mapConfig = new MapConfig();
    EvictionConfig evictionConfig = new EvictionConfig();
    evictionConfig.setEvictionPolicy(EvictionPolicy.LRU);
    evictionConfig.setMaxSizePolicy(MaxSizePolicy.USED_HEAP_SIZE);
    evictionConfig.setSize(size); // interpreted in megabytes for USED_HEAP_SIZE
    mapConfig.setBackupCount(0);
    mapConfig.setEvictionConfig(evictionConfig);
    mapConfig.setTimeToLiveSeconds(ttlMinutes * 60); // convert minutes to seconds
    mapConfig.setMaxIdleSeconds(ttlMinutes * 60);
    return mapConfig;
}
I am running Hazelcast as a single-node instance only. It still exceeds the specified memory size. Please suggest.
Possibly related: this issue in Hazelcast could be the explanation.
See the GitHub issue.
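A possible contributing factor (an assumption worth checking, not confirmed by the issue): Hazelcast evaluates map eviction per partition, so a configured heap limit is effectively spread over the partition count (271 by default), which can make eviction less precise than a single global threshold. A rough back-of-the-envelope sketch, with illustrative numbers:

```java
// Illustrative only: how a USED_HEAP_SIZE limit spreads over partitions.
int configuredHeapMb = 512;   // e.g. evictionConfig.setSize(512)
int partitionCount = 271;     // default hazelcast.partition.count
double perPartitionMb = (double) configuredHeapMb / partitionCount;
System.out.printf("~%.2f MB per partition before eviction is considered%n", perPartitionMb);
```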

How to connect to an elasticsearch cluster with multiple nodes installed on Azure? How do I get the elasticsearch endpoint?

I have installed 3 nodes of Elasticsearch from the Azure Marketplace, and any node can act as the master node. Now how do I connect to the cluster? If I had one node, I could simply use its IP with port 9200, but here I have 3 nodes, so how do I get the cluster endpoint? Thanks
This is how I did it, and it worked well for me:
import java.util.Collections;
import java.util.Vector;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class ElasticsearchConfig {

    private Vector<String> hosts;

    public void setHosts(String hostString) {
        if (hostString == null || hostString.trim().isEmpty()) {
            return;
        }
        String[] hostParts = hostString.split(",");
        this.hosts = new Vector<>();
        Collections.addAll(this.hosts, hostParts);
    }

    public Vector<String> getHosts() {
        return hosts;
    }
}

public class ElasticClient {

    private final ElasticsearchConfig config;
    private RestHighLevelClient client;

    public ElasticClient(ElasticsearchConfig config) {
        this.config = config;
    }

    public void start() throws Exception {
        // Build one HttpHost per "host:port" entry.
        HttpHost[] httpHosts = config.getHosts()
                .stream()
                .map(host -> new HttpHost(host.split(":")[0], Integer.parseInt(host.split(":")[1])))
                .toArray(HttpHost[]::new);
        client = new RestHighLevelClient(RestClient.builder(httpHosts));
        System.out.println("Started ElasticSearch Client");
    }

    public void stop() throws Exception {
        if (client != null) {
            client.close();
        }
        client = null;
    }
}
Set the ElasticsearchConfig as below:
ElasticsearchConfig config = new ElasticsearchConfig();
config.setHosts("ip1:port,ip2:port,ip3:port");
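For reference, the parsing that `setHosts` performs on that comma-separated string, with placeholder addresses standing in for the real node IPs:

```java
import java.util.Arrays;
import java.util.List;

// Placeholder addresses; in practice these are the three node IPs.
String hostString = "10.0.0.1:9200,10.0.0.2:9200,10.0.0.3:9200";
List<String> hosts = Arrays.asList(hostString.split(","));
// Each entry then splits into the hostname and port passed to the HttpHost constructor.
String firstHost = hosts.get(0).split(":")[0];
int firstPort = Integer.parseInt(hosts.get(0).split(":")[1]);
```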
If all three nodes are part of the same cluster, there is no need to specify all of them; even a single node's IP is enough to connect to the cluster.
But that approach has a disadvantage. In a small cluster with a light workload it is fine, but the one node you configure in your Elasticsearch client acts as the coordinating node and can become a hot spot in your cluster. It is better to configure all the nodes in your client, so any node can act as the coordinating node for each request; if you have a heavy workload, you might also consider dedicated coordinating nodes for even better performance.
I hope this answers your question. I didn't provide a code snippet since I don't know which language and client you are using, and I felt the code was not the issue; you wanted to understand the concept in detail.
I appreciate everyone's time and all those who replied. It turns out that, by default, the Azure Marketplace copy of self-managed ES sets up only an internal load balancer. I was able to get the cluster endpoint as soon as I configured an external load balancer. All set now.

Cassandra 3 nodes cluster throwing NoHostAvailableException as soon as one node is down

We have a 3-node cluster with an RF of 3.
As soon as we drain one node from the cluster we see many:
All host(s) tried for query failed (no host was tried)
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
All our writes and reads use a consistency level of QUORUM or ONE, so with one node down everything should work perfectly. But as long as the node is down, exceptions are thrown.
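As a sanity check on that expectation, with RF 3 a QUORUM read or write needs only 2 live replicas, so losing one node should not make requests fail:

```java
int replicationFactor = 3;
int quorum = replicationFactor / 2 + 1;   // floor(RF/2) + 1 = 2
int liveReplicas = replicationFactor - 1; // one node drained
boolean quorumStillPossible = liveReplicas >= quorum;
System.out.println(quorumStillPossible);  // true: the errors must come from elsewhere
```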
We use Cassandra 2.2.4 + Java Cassandra Driver 2.1.10.2
Here's how we create our cluster:
new Cluster.Builder()
.addContactPoints(CONTACT_POINTS)
.withCredentials(USERNAME, PASSWORD)
.withRetryPolicy(new LoggingRetryPolicy(DefaultRetryPolicy.INSTANCE))
.withReconnectionPolicy(new ExponentialReconnectionPolicy(10, 10000))
.withLoadBalancingPolicy(new TokenAwarePolicy(new RoundRobinPolicy()))
.withSocketOptions(new SocketOptions().setReadTimeoutMillis(12_000))
.build();
CONTACT_POINTS is a String array of the 3 public IPs of the nodes.
A few months ago the cluster was working fine with temporarily only 2 nodes, but for an unknown reason that is no longer the case, and I'm running out of ideas :(
Thanks a lot for your help!
Problem solved.
Further analysis showed that the issue came from an IP problem. Our Cassandra servers use private local IPs (10.0.*) to communicate with each other, while our app servers have the public IPs in their config.
When both were in the same network this worked properly, but once the app servers moved to a different network they could connect to only one machine of the cluster; the other two were considered down because the driver was trying to connect to their private local IPs instead of their public ones.
The solution was to add an address translator in the cluster builder:
.withAddressTranslater(new ToPublicIpAddressTranslater())
With the following code:
private static class ToPublicIpAddressTranslater implements AddressTranslater {

    private Map<String, String> internalToPublicIpMap = new HashMap<>();

    public ToPublicIpAddressTranslater() {
        for (int i = 0; i < CONTACT_POINT_PRIVATE_IPS.length; i++) {
            internalToPublicIpMap.put(CONTACT_POINT_PRIVATE_IPS[i], CONTACT_POINTS[i]);
        }
    }

    @Override
    public InetSocketAddress translate(InetSocketAddress address) {
        String publicIp = internalToPublicIpMap.get(address.getHostString());
        if (publicIp != null) {
            return new InetSocketAddress(publicIp, address.getPort());
        }
        return address;
    }
}

Correct usage of AddressResolver interface

I was wondering if there is an example usage of the AddressResolver interface in Apache Ignite.
I was trying to 'bind' my local IP address (e.g. 192.168.10.101) to my external IP address using the AddressResolver interface, but without luck.
When I do that, the Ignite server just hangs (no output from the debug log either).
My code for starting the server is:
TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(ipaddresses);
spi.setIpFinder(ipFinder);
spi.setAddressResolver(new IotAddressResolver());
IgniteConfiguration cfg = new IgniteConfiguration();
// Override default discovery SPI.
cfg.setDiscoverySpi(spi);
System.setProperty("IGNITE_QUIET", "false");
// Start Ignite node.
ignite = Ignition.start(cfg);
My implementation of AddressResolver is:
public class IotAddressResolver implements AddressResolver {

    @Override
    public Collection<InetSocketAddress> getExternalAddresses(
            InetSocketAddress internalAddress) throws IgniteCheckedException {
        String host = "XX.XX.XX.XX"; // the external IP address
        Collection<InetSocketAddress> result = new ArrayList<InetSocketAddress>();
        result.add(new InetSocketAddress(host, internalAddress.getPort()));
        return result;
    }
}
The last line of the Ignite debug log is:
WARNING: Timed out waiting for message to be read (most probably, the reason is in long GC pauses on remote node) [curTimeout=9989]
I will appreciate any help. Thank you
Can you provide more details about your deployment and what you're trying to achieve with the help of the address resolver? How many physical hosts and Ignite nodes do you have? Are they located in different networks with a router between them?
I don't know if this is the best way to handle this, but I managed to start Ignite as a local server. I am setting my local IP and port like this:
System.setProperty("IGNITE_QUIET", "false");
TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
// Set initial IP addresses.
ipFinder.setAddresses(ipaddresses);
spi.setIpFinder(ipFinder);
// Override local port.
commSpi.setLocalPort(47501);
commSpi.setLocalAddress("XX.XX.XX.XX");
commSpi.setLocalPortRange(50);
IgniteConfiguration cfg = new IgniteConfiguration();
// Override default communication SPI.
cfg.setCommunicationSpi(commSpi);
cfg.setDiscoverySpi(spi);
cfg.setAddressResolver(new IotAddressResolver());
cfg.setClientMode(true);
// Start Ignite node
ignite = Ignition.start(cfg);
Where XX.XX.XX.XX is my local IP address
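The hardcoded external address in the resolver above can also be expressed as a lookup table, which mirrors what an address resolver conceptually does. This is a plain-Java sketch of that mapping, not Ignite API; the addresses are illustrative placeholders:

```java
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

// Map each internal (LAN) address to its external (public) address.
Map<String, String> internalToExternal = new HashMap<>();
internalToExternal.put("192.168.10.101", "203.0.113.10");

// createUnresolved avoids DNS lookups on the placeholder addresses.
InetSocketAddress internal = InetSocketAddress.createUnresolved("192.168.10.101", 47500);
String external = internalToExternal.getOrDefault(internal.getHostString(), internal.getHostString());
// Keep the original port, swap only the host, as the resolver above does.
InetSocketAddress translated = InetSocketAddress.createUnresolved(external, internal.getPort());
```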
