Hazelcast cluster not starting

I'm not able to see members joining the cluster when I start multiple instances. Below is just the basic configuration. Each instance seems to form its own single-member cluster on its own port.
@SpringBootApplication
@Configuration
public class HazelcastApplication {

    public static void main(String[] args) {
        SpringApplication.run(HazelcastApplication.class, args);
    }

    @Bean(destroyMethod = "shutdown")
    public HazelcastInstance createStorageNode() throws Exception {
        return Hazelcast.newHazelcastInstance();
    }
}
Members [1] {
    Member [169.254.137.152]:5702 this
}
Members [1] {
    Member [169.254.137.152]:5701 this
}

It's possible that the machine you're running on has multiple network interfaces, which can confuse multicast discovery. Disable multicast and use TCP/IP joining instead by modifying your method to:
@Bean(destroyMethod = "shutdown")
public HazelcastInstance createStorageNode() throws Exception {
    Config config = new Config();
    JoinConfig joinConfig = config.getNetworkConfig().getJoin();
    joinConfig.getMulticastConfig().setEnabled(false);
    joinConfig.getTcpIpConfig().setEnabled(true)
            .getMembers()
            .add("127.0.0.1");
            //.add("169.254.137.152"); // or this interface's address
    return Hazelcast.newHazelcastInstance(config);
}
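If you prefer declarative configuration, the same TCP/IP join can be expressed in a hazelcast.xml on the classpath. A minimal sketch; adjust the member address to your environment:

```xml
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
    <network>
        <join>
            <!-- turn off multicast discovery -->
            <multicast enabled="false"/>
            <!-- join over TCP/IP using an explicit member list -->
            <tcp-ip enabled="true">
                <member>127.0.0.1</member>
            </tcp-ip>
        </join>
    </network>
</hazelcast>
```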

Related

How to implement distributed lock around poller in Spring Integration using ZooKeeper

Spring Integration has ZooKeeper support, as documented at https://docs.spring.io/spring-integration/reference/html/zookeeper.html
However, the documentation is vague: it suggests adding the bean below but does not explain how to start/stop a poller when the node is granted leadership.
@Bean
public LeaderInitiatorFactoryBean leaderInitiator(CuratorFramework client) {
    return new LeaderInitiatorFactoryBean()
            .setClient(client)
            .setPath("/siTest/")
            .setRole("cluster");
}
Is there any example of how to ensure the poller below runs on only one node in the cluster at any time, using ZooKeeper?
@Component
public class EventsPoller {

    public void pullEvents() {
        // pull events should be run by only one node in the cluster at any time
    }
}
The LeaderInitiator emits an OnGrantedEvent when it becomes leader and an OnRevokedEvent when its leadership is revoked.
See https://docs.spring.io/spring-integration/reference/html/messaging-endpoints-chapter.html#endpoint-roles and the following https://docs.spring.io/spring-integration/reference/html/messaging-endpoints-chapter.html#leadership-event-handling for more information on handling those events and how they affect components in a particular role.
I agree, though, that the Zookeeper chapter should link to the SmartLifecycleRoleController chapter. Feel free to raise a JIRA on the matter; contributions are welcome!
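Those events can also be handled directly with an application event listener. A minimal wiring sketch, assuming the standard org.springframework.integration.leader.event types; the start()/stop() methods on EventsPoller are hypothetical and stand for however you control your poller:

```java
import org.springframework.context.event.EventListener;
import org.springframework.integration.leader.event.OnGrantedEvent;
import org.springframework.integration.leader.event.OnRevokedEvent;
import org.springframework.stereotype.Component;

@Component
public class LeadershipEventHandler {

    private final EventsPoller eventsPoller;

    public LeadershipEventHandler(EventsPoller eventsPoller) {
        this.eventsPoller = eventsPoller;
    }

    @EventListener
    public void onGranted(OnGrantedEvent event) {
        // this node is now the leader; begin polling (hypothetical method)
        eventsPoller.start();
    }

    @EventListener
    public void onRevoked(OnRevokedEvent event) {
        // leadership lost; stop polling (hypothetical method)
        eventsPoller.stop();
    }
}
```

The @Role-based approach shown below achieves the same effect without custom listeners, so prefer it when your poller is a SmartLifecycle component.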
UPDATE
This is what I did in our test:
@RunWith(SpringRunner.class)
@DirtiesContext
public class LeaderInitiatorFactoryBeanTests extends ZookeeperTestSupport {

    private static CuratorFramework client;

    @Autowired
    private PollableChannel stringsChannel;

    @BeforeClass
    public static void getClient() throws Exception {
        client = createNewClient();
    }

    @AfterClass
    public static void closeClient() {
        if (client != null) {
            client.close();
        }
    }

    @Test
    public void test() {
        assertNotNull(this.stringsChannel.receive(10_000));
    }

    @Configuration
    @EnableIntegration
    public static class Config {

        @Bean
        public LeaderInitiatorFactoryBean leaderInitiator(CuratorFramework client) {
            return new LeaderInitiatorFactoryBean()
                    .setClient(client)
                    .setPath("/siTest/")
                    .setRole("foo");
        }

        @Bean
        public CuratorFramework client() {
            return LeaderInitiatorFactoryBeanTests.client;
        }

        @Bean
        @InboundChannelAdapter(channel = "stringsChannel", autoStartup = "false", poller = @Poller(fixedDelay = "100"))
        @Role("foo")
        public Supplier<String> inboundChannelAdapter() {
            return () -> "foo";
        }

        @Bean
        public PollableChannel stringsChannel() {
            return new QueueChannel();
        }
    }
}
And I have in logs something like this:
2018-12-14 10:12:33,542 DEBUG [Curator-LeaderSelector-0] [org.springframework.integration.support.SmartLifecycleRoleController] - Starting [leaderInitiatorFactoryBeanTests.Config.inboundChannelAdapter.inboundChannelAdapter] in role foo
2018-12-14 10:12:33,578 DEBUG [Curator-LeaderSelector-0] [org.springframework.integration.support.SmartLifecycleRoleController] - Stopping [leaderInitiatorFactoryBeanTests.Config.inboundChannelAdapter.inboundChannelAdapter] in role foo

Spring Integration: tips on connecting to multiple SFTP servers?

My Spring Batch project needs to download files from multiple SFTP servers.
The SFTP host/port/filePath is configured in the application.properties file. I'm considering using the Spring Integration SFTP outbound gateway to connect to these servers and download files, but I don't know how to set up this kind of configuration (I'm using Java config) and make it work. I guess I need some way to define multiple session factories according to the number of SFTP servers configured in the application.properties file.
properties file:
sftp.host=host1,host2
sftp.user=user1,user2
sftp.pwd=pwd1,pwd2
config class:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory1() {
    ...
}

@Bean(name = "myGateway1")
@ServiceActivator(inputChannel = "sftpChannel1")
public MessageHandler handler1() {
    ...
}

@MessagingGateway
public interface DownloadGateway1 {

    @Gateway(requestChannel = "sftpChannel1")
    List<File> start(String dir);
}

@Bean(name = "sftpChannel1")
public MessageChannel sftpChannel1() {
    return new DirectChannel();
}
Right, the server is specified in the session factory, not the gateway. The framework does provide a DelegatingSessionFactory, which allows one of the configured factories to be selected for each message sent to the gateway. See Delegating Session Factory in the reference manual.
EDIT
Here's an example:
@SpringBootApplication
public class So46721822Application {

    public static void main(String[] args) {
        SpringApplication.run(So46721822Application.class, args);
    }

    @Value("${sftp.name}")
    private String[] names;

    @Value("${sftp.host}")
    private String[] hosts;

    @Value("${sftp.user}")
    private String[] users;

    @Value("${sftp.pwd}")
    private String[] pwds;

    @Autowired
    private DelegatingSessionFactory<?> sessionFactory;

    @Autowired
    private SftpGateway gateway;

    @Bean
    public ApplicationRunner runner() {
        return args -> {
            try {
                this.sessionFactory.setThreadKey("one"); // use factory "one"
                this.gateway.send(new File("/tmp/f.txt"));
            }
            finally {
                this.sessionFactory.clearThreadKey();
            }
        };
    }

    @Bean
    public DelegatingSessionFactory<LsEntry> sessionFactory() {
        Map<Object, SessionFactory<LsEntry>> factories = new LinkedHashMap<>();
        for (int i = 0; i < this.names.length; i++) {
            DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory();
            factory.setHost(this.hosts[i]);
            factory.setUser(this.users[i]);
            factory.setPassword(this.pwds[i]);
            factories.put(this.names[i], factory);
        }
        // use the first SF as the default
        return new DelegatingSessionFactory<LsEntry>(factories, factories.values().iterator().next());
    }

    @ServiceActivator(inputChannel = "toSftp")
    @Bean
    public SftpMessageHandler handler() {
        SftpMessageHandler handler = new SftpMessageHandler(sessionFactory());
        handler.setRemoteDirectoryExpression(new LiteralExpression("foo"));
        return handler;
    }

    @MessagingGateway(defaultRequestChannel = "toSftp")
    public interface SftpGateway {

        void send(File file);
    }
}
with properties...
sftp.name=one,two
sftp.host=host1,host2
sftp.user=user1,user2
sftp.pwd=pwd1,pwd2
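The keying logic in sessionFactory() above simply zips the comma-separated property lists into an ordered map, one entry per server. As a plain-JDK illustration of that step (class and method names here are hypothetical; no Spring or SFTP dependencies are needed):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FactoryKeys {

    // Build an ordered name -> host map, mirroring how the example keys
    // one session factory per entry of the comma-separated properties.
    static Map<String, String> keyByName(String namesProp, String hostsProp) {
        String[] names = namesProp.split(",");
        String[] hosts = hostsProp.split(",");
        Map<String, String> byName = new LinkedHashMap<>();
        for (int i = 0; i < names.length; i++) {
            byName.put(names[i], hosts[i]);
        }
        return byName;
    }

    public static void main(String[] args) {
        Map<String, String> byName = keyByName("one,two", "host1,host2");
        System.out.println(byName); // {one=host1, two=host2}
    }
}
```

A LinkedHashMap keeps insertion order, which is what lets "use the first SF as the default" work via factories.values().iterator().next().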

Using Hazelcast, how can I create an event or catch when a member is shut down, and print a message?

This is my source. How can I print a message if one of the members is shut down for some reason? I think I can use some event or some kind of listener, but how?
import com.hazelcast.core.*;
import com.hazelcast.config.*;

import java.util.Map;

/**
 * @author alvel
 */
public class ShutDown {

    public static void main(String[] args) {
        Config cfg = new Config();
        HazelcastInstance memberOne = Hazelcast.newHazelcastInstance(cfg);
        HazelcastInstance memberTwo = Hazelcast.newHazelcastInstance(cfg);

        Map<Integer, String> customerMap = memberOne.getMap("customers");
        customerMap.put(1, "google");
        customerMap.put(2, "apple");
        customerMap.put(3, "yahoo");
        customerMap.put(4, "microsoft");
        System.out.println("Hazelcast Nodes in this cluster" + Hazelcast.getAllHazelcastInstances().size());

        memberOne.shutdown();
        System.out.println("Hazelcast Nodes in this cluster After shutdown" + Hazelcast.getAllHazelcastInstances().size());

        Map<Integer, String> customerRestored = memberTwo.getMap("customers");
        for (String val : customerRestored.values()) {
            System.out.println("-" + val);
        }
    }
}
Try this; it adds a few lines to your code and a new listener class:
public class ShutDown {

    static {
        // ONLY TEMPORARY
        System.setProperty("hazelcast.logging.type", "none");
    }

    public static void main(String[] args) {
        Config cfg = new Config();
        HazelcastInstance memberOne = Hazelcast.newHazelcastInstance(cfg);
        // ADDED TO MEMBER ONE
        memberOne.getCluster().addMembershipListener(new ShutDownMembershipListener());
        HazelcastInstance memberTwo = Hazelcast.newHazelcastInstance(cfg);
        // ADDED TO MEMBER TWO
        memberTwo.getCluster().addMembershipListener(new ShutDownMembershipListener());

        Map<Integer, String> customerMap = memberOne.getMap("customers");
        customerMap.put(1, "google");
        customerMap.put(2, "apple");
        customerMap.put(3, "yahoo");
        customerMap.put(4, "microsoft");
        System.out.println("Hazelcast Nodes in this cluster" + Hazelcast.getAllHazelcastInstances().size());

        memberOne.shutdown();
        System.out.println("Hazelcast Nodes in this cluster After shutdown" + Hazelcast.getAllHazelcastInstances().size());

        Map<Integer, String> customerRestored = memberTwo.getMap("customers");
        for (String val : customerRestored.values()) {
            System.out.println("-" + val);
        }
    }

    static class ShutDownMembershipListener implements MembershipListener {

        @Override
        public void memberAdded(MembershipEvent membershipEvent) {
            System.out.println(this + membershipEvent.toString());
        }

        @Override
        public void memberAttributeChanged(MemberAttributeEvent arg0) {
        }

        @Override
        public void memberRemoved(MembershipEvent membershipEvent) {
            System.out.println(this + membershipEvent.toString());
        }
    }
}
The line System.setProperty("hazelcast.logging.type", "none") is just for testing to make it simpler to see what is happening.

Registering AutoMapper with Unity fails

I have the following code to register mappings (AutoMapper version 4.2):
public class ModelMapperProfile : Profile
{
    protected override void Configure()
    {
        CreateMap<Case, CaseModel>();
        CreateMap<CaseDetail, CaseDetailModel>();
    }
}

public static class AutoMapperService
{
    public static MapperConfiguration Initialize()
    {
        MapperConfiguration config = new MapperConfiguration(cfg =>
        {
            cfg.AddProfile<ModelMapperProfile>();
        });

        return config;
    }
}
And I register the dependency using unity as follows...
public static void RegisterTypes(IUnityContainer container)
{
    container.LoadConfiguration();
    var mapper = AutoMapperService.Initialize()
        .CreateMapper();
    container.RegisterInstance<IMapper>(mapper);
}
Here is my service constructor:
public TaxLiabilityCaseService(IMapper mapper,
    IUnitOfWork unitofWork,
    IRepository<Case> caseR,
    IRepository<CaseDetail> caseDetailR)
{
    _mapper = mapper;
    _unitofWork = unitofWork;
    _caseR = caseR;
    _caseDetailR = caseDetailR;
}
And I get the following error message:
The current type, AutoMapper.IMapper, is an interface and cannot be
constructed. Are you missing a type mapping?
The answers I found here did not work for me.
What am I missing?
Try following these steps (MVC5):
Install the Unity.Mvc5 NuGet package.
Create this class:
public class MapperConfig
{
    public static IMapper Mapper { get; set; }

    public static void RegisterProfiles()
    {
        var config = new MapperConfiguration(cfg =>
        {
            // add profiles here
        });

        config.AssertConfigurationIsValid();
        Mapper = config.CreateMapper();
    }
}
In the UnityConfig file (created by the package), add this:
public static void RegisterComponents()
{
    var container = new UnityContainer();
    container.RegisterInstance<IMapper>(MapperConfig.Mapper);
}
In the Global.asax, add these:
protected void Application_Start()
{
    MapperConfig.RegisterProfiles();
    UnityConfig.RegisterComponents();
}
You should be good after this.

Hazelcast Client - stuck thread

I'm running into a hung thread issue using Hazelcast 3.5.1
My applications will run and then silently stop working.
It appears that I have multiple threads in the HZ client that are stuck.
Client Trace
State:TIMED_WAITING
Priority:5
java.lang.Object.wait(Native Method)
com.hazelcast.client.spi.impl.ClientInvocationFuture.get(ClientInvocationFuture.java:104)
com.hazelcast.client.spi.impl.ClientInvocationFuture.get(ClientInvocationFuture.java:89)
com.hazelcast.client.spi.ClientProxy.invoke(ClientProxy.java:130)
com.hazelcast.client.proxy.ClientMapProxy.get(ClientMapProxy.java:197)
Server Error
[ERROR] [2015-07-29 18:20:12,812] [hz._hzInstance_1_dev.partition-operation.thread-0] [][c.h.m.i.o.GetOperation] [[198.47.158.82]:5900 [dev] [3.5.1] io.protostuff.UninitializedMessageException]
com.hazelcast.nio.serialization.HazelcastSerializationException: io.protostuff.UninitializedMessageException
at com.hazelcast.nio.serialization.SerializationServiceImpl.handleException(SerializationServiceImpl.java:380) ~[hazelcast-3.5.1.jar:3.5.1]
at com.hazelcast.nio.serialization.SerializationServiceImpl.toData(SerializationServiceImpl.java:235) ~[hazelcast-3.5.1.jar:3.5.1]
at com.hazelcast.map.impl.record.DataRecordFactory.newRecord(DataRecordFactory.java:47) ~[hazelcast-3.5.1.jar:3.5.1]
Client Config
public ClientConfig config() {
    final ClientConfig config = new ClientConfig();
    config.setExecutorPoolSize(100);
    setupLoggingConfig(config);
    setupNetworkConfig(config);
    setupGroupConfig(config);
    setupSerializationConfig(config);
    setupAdvancedConfig(config);
    return config;
}

private void setupAdvancedConfig(final ClientConfig config) {
    config.setProperty(GroupProperties.PROP_OPERATION_CALL_TIMEOUT_MILLIS, String.valueOf(5000));
}

private void setupLoggingConfig(final ClientConfig config) {
    config.setProperty("hazelcast.logging.type", "slf4j");
}

private void setupNetworkConfig(final ClientConfig config) {
    final ClientNetworkConfig networkConfig = config.getNetworkConfig();
    networkConfig.setConnectionTimeout(1000);
    networkConfig.setConnectionAttemptPeriod(3000);
    networkConfig.setConnectionAttemptLimit(2);
    networkConfig.setRedoOperation(true);
    networkConfig.setSmartRouting(true);
    setupNetworkSocketConfig(networkConfig);
}

private void setupNetworkSocketConfig(final ClientNetworkConfig networkConfig) {
    final SocketOptions socketOptions = networkConfig.getSocketOptions();
    socketOptions.setKeepAlive(false);
    socketOptions.setBufferSize(32);
    socketOptions.setLingerSeconds(3);
    socketOptions.setReuseAddress(false);
    socketOptions.setTcpNoDelay(false);
}
Server Config
private void init(final Config config) {
    setupExecutorConfig(config);
    setupLoggingConfig(config);
    setupMapConfigs(config);
    setupNetworkConfig(config);
    setupGroupConfig(config);
    setupAdvancedConfig(config);
    setupSerializationConfig(config);
}

private void setupAdvancedConfig(final Config config) {
    config.setProperty(GroupProperties.PROP_OPERATION_CALL_TIMEOUT_MILLIS, String.valueOf(5000));
}

private void setupExecutorConfig(final Config config) {
    final ExecutorConfig executorConfig = new ExecutorConfig();
    executorConfig.setPoolSize(300);
    config.addExecutorConfig(executorConfig);
}

private void setupLoggingConfig(final Config config) {
    config.setProperty("hazelcast.logging.type", "slf4j");
}

private void setupNetworkConfig(final Config config) {
    final NetworkConfig networkCfg = config.getNetworkConfig();
    networkCfg.setPort(5900);
    networkCfg.setPortAutoIncrement(false);
    final JoinConfig join = networkCfg.getJoin();
    join.getMulticastConfig().setEnabled(false);
    for (final String server : getServers()) {
        join.getTcpIpConfig().addMember(server);
    }
}

private String[] getServers() {
    return PROPS.getProperty("store.servers").split(",");
}

private void setupMapConfigs(final Config config) {
    setupMapConfigXXX(config);
}

private void setupMapConfigXXX(final Config config) {
    final MapConfig mapConfig = setupMapConfigByName(config, XXX.class.getName());
    setupMapStoreConfigDummy(mapConfig);
    setupEvictionPolicy(mapConfig);
}

private void setupMapStoreConfigDummy(final MapConfig mapConfig) {
    final MapStoreConfig mapStoreConfig = new MapStoreConfig();
    mapStoreConfig.setClassName(DummyStore.class.getName()).setEnabled(true);
    mapConfig.setMapStoreConfig(mapStoreConfig);
}

private void setupEvictionPolicy(final MapConfig mapConfig) {
    mapConfig.setEvictionPolicy(EvictionPolicy.LFU);
    mapConfig.setMaxSizeConfig(oneGBSize());
}

private MapConfig setupMapConfigByName(final Config config, final String mapName) {
    final MapConfig mapConfig = new MapConfig();
    mapConfig.setName(mapName);
    mapConfig.setBackupCount(1);
    final NearCacheConfig nearCacheConfig = new NearCacheConfig();
    nearCacheConfig.setMaxSize(1000).setMaxIdleSeconds(300).setTimeToLiveSeconds(300);
    mapConfig.setNearCacheConfig(nearCacheConfig);
    config.addMapConfig(mapConfig);
    return mapConfig;
}

private MaxSizeConfig oneGBSize() {
    final MaxSizeConfig config = new MaxSizeConfig();
    config.setMaxSizePolicy(MaxSizePolicy.USED_HEAP_SIZE);
    config.setSize(1024);
    return config;
}
I would expect the client to time out, but that doesn't appear to be happening.
I believe you should configure the client-side timeout via the property ClientProperties.PROP_INVOCATION_TIMEOUT_SECONDS.
However, that is just a band-aid. You should find the real root cause of why your serialization is failing; the io.protostuff.UninitializedMessageException on the server suggests a value was serialized without all of its required fields set.
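For example, a client-side configuration sketch; the property string is the 3.x name behind ClientProperties.PROP_INVOCATION_TIMEOUT_SECONDS, so verify it against the Hazelcast version you actually run:

```java
ClientConfig config = new ClientConfig();
// fail invocations that cannot complete within 30 seconds,
// instead of blocking on the much larger default
config.setProperty("hazelcast.client.invocation.timeout.seconds", "30");
```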
