MapConfig in Hazelcast Jet

I was using the configuration below with Hazelcast IMDG.
Now I want to use the same configuration with Jet as well.
@Bean
public static Config config() {
System.err.println("config class");
Config config = new Config();
config.setInstanceName("hazelcast");
MapConfig mapCfg = new MapConfig();
mapCfg.setName("t1");
mapCfg.setBackupCount(2);
mapCfg.setTimeToLiveSeconds(300);
MapStoreConfig mapStoreCfg = new MapStoreConfig();
mapStoreCfg.setClassName(PersonMapStore.class.getName()).setEnabled(true);
mapCfg.setMapStoreConfig(mapStoreCfg);
config.addMapConfig(mapCfg);
return config;
}
How do I set the MapConfig for Hazelcast Jet?
JetInstance jet = Jet.newJetInstance(null);
IMap<String, Person> map2 = jet.getMap("t1");

The method Jet.newJetInstance(JetConfig) takes a JetConfig object, which has a setHazelcastConfig method to set the config for IMDG:
@Bean
public static Config config() {
...
}
@Bean
public static JetConfig jetConfig() {
JetConfig jetConfig = new JetConfig();
jetConfig.setHazelcastConfig(config());
...
return jetConfig;
}
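Putting the two beans together, a minimal bootstrap could look like this (a sketch against the Jet 3.x API; `config()` is the IMDG Config bean shown above, and `Person` is the question's own class):

```java
// Sketch, assuming Jet 3.x: wire the IMDG Config into a JetConfig and create
// the instance from it; maps obtained from the Jet instance then use the
// "t1" MapConfig (backups, TTL, MapStore) defined in the IMDG config.
JetConfig jetConfig = new JetConfig();
jetConfig.setHazelcastConfig(config());

JetInstance jet = Jet.newJetInstance(jetConfig);
IMap<String, Person> map = jet.getMap("t1");
```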


same file not picked up by multiple hosts

I have an app hosted on multiple hosts, all listening to a single remote SFTP location. How should I make sure the same file is not picked up by one host when it has already been picked up by another? I am pretty new to Spring Integration. I would appreciate it if someone could share examples.
EDIT:
Here is my integration flow: it gets a file from SFTP, places it in a local directory, performs business logic in a transformer, and sends the resulting file to the remote SFTP server.
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
LOGGER.debug(" Creating SFTP Session Factory -Start");
DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
factory.setHost(sftpHost);
factory.setUser(sftpUser);
factory.setPort(port);
factory.setPassword(sftpPassword);
factory.setAllowUnknownKeys(true);
return new CachingSessionFactory<>(factory);
}
@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
fileSynchronizer.setDeleteRemoteFiles(true);
fileSynchronizer.setRemoteDirectory(sftpInboundDirectory);
fileSynchronizer.setFilter(new SftpPersistentAcceptOnceFileListFilter(store(), "*.json"));
return fileSynchronizer;
}
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
PollerMetadata pollerMetadata = new PollerMetadata();
pollerMetadata.setTrigger(new PeriodicTrigger(5000));
return pollerMetadata;
}
@Bean
@InboundChannelAdapter(channel = "fileInputChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<File> sftpMessageSource() {
SftpInboundFileSynchronizingMessageSource source =
new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
source.setLocalDirectory(localDirectory);
source.setAutoCreateLocalDirectory(true);
source.setLocalFilter(new AcceptOnceFileListFilter<File>());
source.setMaxFetchSize(1);
return source;
}
@Bean
IntegrationFlow integrationFlow() {
return IntegrationFlows.from(this.sftpMessageSource())
.channel(fileInputChannel())
.transform(this::messageTransformer)
.channel(fileOutputChannel())
.handle(orderOutMessageHandler())
.get();
}
@Bean
@ServiceActivator(inputChannel = "fileOutputChannel")
public SftpMessageHandler orderOutMessageHandler() {
SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
LOGGER.debug(" Creating SFTP MessageHandler - Start ");
handler.setRemoteDirectoryExpression(new LiteralExpression(sftpOutboundDirectory));
handler.setFileNameGenerator(new FileNameGenerator() {
@Override
public String generateFileName(Message<?> message) {
if (message.getPayload() instanceof File) {
return ((File) message.getPayload()).getName();
} else {
throw new IllegalArgumentException("Expected Input is File.");
}
}
});
LOGGER.debug(" Creating SFTP MessageHandler - End ");
return handler;
}
@Bean
@org.springframework.integration.annotation.Transformer(inputChannel = "fileInputChannel", outputChannel = "fileOutputChannel")
public Transformer messageTransformer() {
return message -> {
File file = orderTransformer.transformInboundMessage(message);
// wrap the payload in a Message; casting a File to Message<?> would throw ClassCastException
return MessageBuilder.withPayload(file).build();
};
}
@Bean
public ConcurrentMetadataStore store() {
return new SimpleMetadataStore(hazelcastInstance().getMap("idempotentReceiverMetadataStore"));
}
@Bean
public HazelcastInstance hazelcastInstance() {
return Hazelcast.newHazelcastInstance(new Config().setProperty("hazelcast.logging.type", "slf4j"));
}

See the SftpPersistentAcceptOnceFileListFilter injected into the SFTP inbound channel adapter. It has to be supplied with a MetadataStore backed by a shared database (or another shared store), so that all hosts consult the same state.
See more info in the docs:
https://docs.spring.io/spring-integration/docs/current/reference/html/system-management.html#metadata-store
https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#sftp-inbound
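The accept-once behavior that the shared MetadataStore provides boils down to an atomic put-if-absent on a shared key. A simplified, self-contained stand-in (plain Java, not the actual Spring Integration classes; all names here are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration of accept-once filtering backed by a shared store. In the real
// setup the store is a MetadataStore over a shared database or Hazelcast map;
// a ConcurrentHashMap is a local stand-in with the same atomic semantics.
public class AcceptOnceDemo {

    private final Map<String, String> store = new ConcurrentHashMap<>();

    /** Returns true only for the first host that claims the file. */
    public boolean accept(String fileName, String host) {
        // putIfAbsent is atomic: exactly one caller sees null and wins the claim
        return store.putIfAbsent(fileName, host) == null;
    }
}
```

With a store shared by all hosts, only the first host to call accept for a given file name processes it; every other host sees an existing entry and skips the file.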

spring integration : solutions/tips on connect multiple sftp server?

My Spring Batch project needs to download files from multiple SFTP servers.
The SFTP host/port/filePath is configured in the application.properties file. I am considering using the Spring Integration SFTP outbound gateway to connect to these servers and download files, but I don't know how to do this kind of configuration (I'm using Java config) and make it work. I guess I need some way to define multiple session factories according to the SFTP server info configured in the application.properties file.
properties file:
sftp.host=host1,host2
sftp.user=user1,user2
sftp.pwd=pwd1,pwd2
config class:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory1() {
...
}
@Bean(name = "myGateway1")
@ServiceActivator(inputChannel = "sftpChannel1")
public MessageHandler handler1() {
...
}
@MessagingGateway
public interface DownloadGateway1 {
@Gateway(requestChannel = "sftpChannel1")
List<File> start(String dir);
}
@Bean(name = "sftpChannel1")
public MessageChannel sftpChannel1() {
return new DirectChannel();
}
Right, the server is specified in the session factory, not the gateway. The framework provides a DelegatingSessionFactory, which lets one of the configured factories be selected for each message sent to the gateway. See Delegating Session Factory in the reference manual.
EDIT
Here's an example:
@SpringBootApplication
public class So46721822Application {
public static void main(String[] args) {
SpringApplication.run(So46721822Application.class, args);
}
@Value("${sftp.name}")
private String[] names;
@Value("${sftp.host}")
private String[] hosts;
@Value("${sftp.user}")
private String[] users;
@Value("${sftp.pwd}")
private String[] pwds;
@Autowired
private DelegatingSessionFactory<?> sessionFactory;
@Autowired
private SftpGateway gateway;
@Bean
public ApplicationRunner runner() {
return args -> {
try {
this.sessionFactory.setThreadKey("one"); // use factory "one"
this.gateway.send(new File("/tmp/f.txt"));
}
finally {
this.sessionFactory.clearThreadKey();
}
};
}
@Bean
public DelegatingSessionFactory<LsEntry> sessionFactory() {
Map<Object, SessionFactory<LsEntry>> factories = new LinkedHashMap<>();
for (int i = 0; i < this.names.length; i++) {
DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory();
factory.setHost(this.hosts[i]);
factory.setUser(this.users[i]);
factory.setPassword(this.pwds[i]);
factories.put(this.names[i], factory);
}
// use the first SF as the default
return new DelegatingSessionFactory<LsEntry>(factories, factories.values().iterator().next());
}
@ServiceActivator(inputChannel = "toSftp")
@Bean
public SftpMessageHandler handler() {
SftpMessageHandler handler = new SftpMessageHandler(sessionFactory());
handler.setRemoteDirectoryExpression(new LiteralExpression("foo"));
return handler;
}
@MessagingGateway(defaultRequestChannel = "toSftp")
public interface SftpGateway {
void send(File file);
}
}
with properties...
sftp.name=one,two
sftp.host=host1,host2
sftp.user=user1,user2
sftp.pwd=pwd1,pwd2
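Under the hood, the delegating pattern is just a thread-bound key selecting one delegate from a map of configured factories. A minimal stand-alone sketch of that idea (plain Java; DelegatingFactoryDemo is an illustrative name, not the actual Spring Integration class):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the pattern behind DelegatingSessionFactory:
// a ThreadLocal key picks one delegate per thread, with a fallback default.
public class DelegatingFactoryDemo<T> {

    private final Map<Object, T> delegates;
    private final T defaultDelegate;
    private final ThreadLocal<Object> threadKey = new ThreadLocal<>();

    public DelegatingFactoryDemo(Map<Object, T> delegates, T defaultDelegate) {
        this.delegates = new LinkedHashMap<>(delegates);
        this.defaultDelegate = defaultDelegate;
    }

    public void setThreadKey(Object key) {
        this.threadKey.set(key);
    }

    public void clearThreadKey() {
        this.threadKey.remove();
    }

    /** Resolve the delegate for the current thread, falling back to the default. */
    public T getDelegate() {
        Object key = this.threadKey.get();
        T delegate = key == null ? null : this.delegates.get(key);
        return delegate == null ? this.defaultDelegate : delegate;
    }
}
```

This mirrors the setThreadKey/clearThreadKey calls in the runner above: set the key, send through the gateway, and clear the key in a finally block.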

Spring Cloud App Starter, sftp source, recurse a directory for files

I am using the SFTP Source in Spring Cloud Data Flow, and it works for getting files from the directory defined in sftp:remote-dir:/home/someone/source. Now I have many subfolders under the remote-dir, and I want to recursively get all the files under this directory that match the pattern. I am trying to use filename-regex:, but so far it only works on one level. How do I recursively get the files I need?
The inbound channel adapter does not support recursion; use a custom source with the outbound gateway and an MGET command with recursion (-R).
The doc is missing that option; fixed in the current docs.
I opened an issue to create a standard app starter.
EDIT
With the Java DSL...
@SpringBootApplication
@EnableBinding(Source.class)
public class So44710754Application {
public static void main(String[] args) {
SpringApplication.run(So44710754Application.class, args);
}
// should store in Redis or similar for persistence
private final ConcurrentMap<String, Boolean> processed = new ConcurrentHashMap<>();
@Bean
public IntegrationFlow flow() {
return IntegrationFlows.from(source(), e -> e.poller(Pollers.fixedDelay(30_000)))
.handle(gateway())
.split()
.<File>filter(p -> this.processed.putIfAbsent(p.getAbsolutePath(), true) == null)
.transform(Transformers.fileToByteArray())
.channel(Source.OUTPUT)
.get();
}
private MessageSource<String> source() {
return () -> new GenericMessage<>("foo/*");
}
private AbstractRemoteFileOutboundGateway<LsEntry> gateway() {
AbstractRemoteFileOutboundGateway<LsEntry> gateway = Sftp.outboundGateway(sessionFactory(), "mget", "payload")
.localDirectory(new File("/tmp/foo"))
.options(Option.RECURSIVE)
.get();
gateway.setFileExistsMode(FileExistsMode.IGNORE);
return gateway;
}
private SessionFactory<LsEntry> sessionFactory() {
DefaultSftpSessionFactory sf = new DefaultSftpSessionFactory();
sf.setHost("10.0.0.3");
sf.setUser("ftptest");
sf.setPassword("ftptest");
sf.setAllowUnknownKeys(true);
return new CachingSessionFactory<>(sf);
}
}
And with Java config...
#SpringBootApplication
#EnableBinding(Source.class)
public class So44710754Application {
public static void main(String[] args) {
SpringApplication.run(So44710754Application.class, args);
}
@InboundChannelAdapter(channel = "sftpGate", poller = @Poller(fixedDelay = "30000"))
public String remoteDir() {
return "foo/*";
}
@Bean
@ServiceActivator(inputChannel = "sftpGate")
public SftpOutboundGateway mgetGate() {
SftpOutboundGateway sftpOutboundGateway = new SftpOutboundGateway(sessionFactory(), "mget", "payload");
sftpOutboundGateway.setOutputChannelName("splitterChannel");
sftpOutboundGateway.setFileExistsMode(FileExistsMode.IGNORE);
sftpOutboundGateway.setLocalDirectory(new File("/tmp/foo"));
sftpOutboundGateway.setOptions("-R");
return sftpOutboundGateway;
}
@Bean
@Splitter(inputChannel = "splitterChannel")
public DefaultMessageSplitter splitter() {
DefaultMessageSplitter splitter = new DefaultMessageSplitter();
splitter.setOutputChannelName("filterChannel");
return splitter;
}
// should store in Redis, Zookeeper, or similar for persistence
private final ConcurrentMap<String, Boolean> processed = new ConcurrentHashMap<>();
@Filter(inputChannel = "filterChannel", outputChannel = "toBytesChannel")
public boolean filter(File payload) {
return this.processed.putIfAbsent(payload.getAbsolutePath(), true) == null;
}
@Bean
@Transformer(inputChannel = "toBytesChannel", outputChannel = Source.OUTPUT)
public FileToByteArrayTransformer toBytes() {
FileToByteArrayTransformer transformer = new FileToByteArrayTransformer();
return transformer;
}
private SessionFactory<LsEntry> sessionFactory() {
DefaultSftpSessionFactory sf = new DefaultSftpSessionFactory();
sf.setHost("10.0.0.3");
sf.setUser("ftptest");
sf.setPassword("ftptest");
sf.setAllowUnknownKeys(true);
return new CachingSessionFactory<>(sf);
}
}

Hazelcast Client - stuck thread

I'm running into a hung thread issue using Hazelcast 3.5.1
My applications will run and then silently stop working.
It appears that I have multiple threads in the HZ client that are stuck.
Client Trace
State:TIMED_WAITING
Priority:5
java.lang.Object.wait(Native Method)
com.hazelcast.client.spi.impl.ClientInvocationFuture.get(ClientInvocationFuture.java:104)
com.hazelcast.client.spi.impl.ClientInvocationFuture.get(ClientInvocationFuture.java:89)
com.hazelcast.client.spi.ClientProxy.invoke(ClientProxy.java:130)
com.hazelcast.client.proxy.ClientMapProxy.get(ClientMapProxy.java:197)
Server Error
[ERROR] [2015-07-29 18:20:12,812] [hz._hzInstance_1_dev.partition-operation.thread-0] [][c.h.m.i.o.GetOperation] [[198.47.158.82]:5900 [dev] [3.5.1] io.protostuff.UninitializedMessageException]
com.hazelcast.nio.serialization.HazelcastSerializationException: io.protostuff.UninitializedMessageException
at com.hazelcast.nio.serialization.SerializationServiceImpl.handleException(SerializationServiceImpl.java:380) ~[hazelcast-3.5.1.jar:3.5.1]
at com.hazelcast.nio.serialization.SerializationServiceImpl.toData(SerializationServiceImpl.java:235) ~[hazelcast-3.5.1.jar:3.5.1]
at com.hazelcast.map.impl.record.DataRecordFactory.newRecord(DataRecordFactory.java:47) ~[hazelcast-3.5.1.jar:3.5.1]
Client Config
public ClientConfig config() {
final ClientConfig config = new ClientConfig();
config.setExecutorPoolSize(100);
setupLoggingConfig(config);
setupNetworkConfig(config);
setupGroupConfig(config);
setupSerializationConfig(config);
setupAdvancedConfig(config);
return config;
}
private void setupAdvancedConfig(final ClientConfig config) {
config.setProperty(GroupProperties.PROP_OPERATION_CALL_TIMEOUT_MILLIS, String.valueOf(5000));
}
private void setupLoggingConfig(final ClientConfig config) {
config.setProperty("hazelcast.logging.type", "slf4j");
}
private void setupNetworkConfig(final ClientConfig config) {
final ClientNetworkConfig networkConfig = config.getNetworkConfig();
networkConfig.setConnectionTimeout(1000);
networkConfig.setConnectionAttemptPeriod(3000);
networkConfig.setConnectionAttemptLimit(2);
networkConfig.setRedoOperation(true);
networkConfig.setSmartRouting(true);
setupNetworkSocketConfig(networkConfig);
}
private void setupNetworkSocketConfig(final ClientNetworkConfig networkConfig) {
final SocketOptions socketOptions = networkConfig.getSocketOptions();
socketOptions.setKeepAlive(false);
socketOptions.setBufferSize(32);
socketOptions.setLingerSeconds(3);
socketOptions.setReuseAddress(false);
socketOptions.setTcpNoDelay(false);
}
Server Config
private void init(final Config config) {
setupExecutorConfig(config);
setupLoggingConfig(config);
setupMapConfigs(config);
setupNetworkConfig(config);
setupGroupConfig(config);
setupAdvancedConfig(config);
setupSerializationConfig(config);
}
private void setupAdvancedConfig(final Config config) {
config.setProperty(GroupProperties.PROP_OPERATION_CALL_TIMEOUT_MILLIS, String.valueOf(5000));
}
private void setupExecutorConfig(final Config config) {
final ExecutorConfig executorConfig = new ExecutorConfig();
executorConfig.setPoolSize(300);
config.addExecutorConfig(executorConfig);
}
private void setupLoggingConfig(final Config config) {
config.setProperty("hazelcast.logging.type", "slf4j");
}
private void setupNetworkConfig(final Config config) {
final NetworkConfig networkCfg = config.getNetworkConfig();
networkCfg.setPort(5900);
networkCfg.setPortAutoIncrement(false);
final JoinConfig join = networkCfg.getJoin();
join.getMulticastConfig().setEnabled(false);
for (final String server : getServers()) {
join.getTcpIpConfig().addMember(server);
}
}
private String[] getServers() {
return PROPS.getProperty("store.servers").split(",");
}
private void setupMapConfigs(final Config config) {
setupMapConfigXXX(config);
}
private void setupMapConfigXXX(final Config config) {
final MapConfig mapConfig = setupMapConfigByName(config, XXX.class.getName());
setupMapStoreConfigDummy(mapConfig);
setupEvictionPolicy(mapConfig);
}
private void setupMapStoreConfigDummy(final MapConfig mapConfig) {
final MapStoreConfig mapStoreConfig = new MapStoreConfig();
mapStoreConfig.setClassName(DummyStore.class.getName()).setEnabled(true);
mapConfig.setMapStoreConfig(mapStoreConfig);
}
private void setupEvictionPolicy(final MapConfig mapConfig) {
mapConfig.setEvictionPolicy(EvictionPolicy.LFU);
mapConfig.setMaxSizeConfig(oneGBSize());
}
private MapConfig setupMapConfigByName(final Config config, final String mapName) {
final MapConfig mapConfig = new MapConfig();
mapConfig.setName(mapName);
mapConfig.setBackupCount(1);
final NearCacheConfig nearCacheConfig = new NearCacheConfig();
nearCacheConfig.setMaxSize(1000).setMaxIdleSeconds(300).setTimeToLiveSeconds(300);
mapConfig.setNearCacheConfig(nearCacheConfig);
config.addMapConfig(mapConfig);
return mapConfig;
}
private MaxSizeConfig oneGBSize() {
final MaxSizeConfig config = new MaxSizeConfig();
config.setMaxSizePolicy(MaxSizePolicy.USED_HEAP_SIZE);
config.setSize(1024);
return config;
}
I would expect the client to time out, but that doesn't appear to be happening.
I believe you should configure the client-side timeout via the property ClientProperties.PROP_INVOCATION_TIMEOUT_SECONDS.
However, that is just a band-aid; you should find the real root cause of why your serialization is failing.
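For reference, the property can be set on the client config like any other property (a sketch, assuming the Hazelcast 3.x client; the string key below is what ClientProperties.PROP_INVOCATION_TIMEOUT_SECONDS resolves to in 3.x):

```java
// Sketch (Hazelcast 3.x client): bound the invocation wait so stuck
// operations fail with an exception instead of appearing to hang forever.
ClientConfig config = new ClientConfig();
config.setProperty("hazelcast.client.invocation.timeout.seconds", "30");
```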

How to expire Hazelcast session

I'm using the spring-session libraries to persist the session on Hazelcast, like this:
@WebListener
public class HazelcastInitializer implements ServletContextListener {
private HazelcastInstance instance;
@Override
public void contextInitialized(ServletContextEvent sce) {
String sessionMapName = "spring:session:sessions";
ServletContext sc = sce.getServletContext();
ClientConfig clientConfig = new ClientConfig();
clientConfig.getGroupConfig().setName("nameValue").setPassword("passValue");
clientConfig.getNetworkConfig().addAddress("ipValue");
clientConfig.getNetworkConfig().setSmartRouting(true);
Collection<SerializerConfig> scfg = new ArrayList<SerializerConfig>();
SerializerConfig serializer = new SerializerConfig()
.setTypeClass(Object.class)
.setImplementation(new ObjectStreamSerializer());
scfg.add(serializer);
clientConfig.getSerializationConfig().setSerializerConfigs(scfg);
instance = HazelcastClient.newHazelcastClient(clientConfig);
Map<String, ExpiringSession> sessions = instance.getMap(sessionMapName);
SessionRepository<ExpiringSession> sessionRepository
= new MapSessionRepository(sessions);
SessionRepositoryFilter<ExpiringSession> filter
= new SessionRepositoryFilter<ExpiringSession>(sessionRepository);
Dynamic fr = sc.addFilter("springSessionFilter", filter);
fr.addMappingForUrlPatterns(EnumSet.of(DispatcherType.REQUEST), true, "/*");
}
@Override
public void contextDestroyed(ServletContextEvent sce) {
if (instance != null) {
instance.shutdown();
}
}
}
How can I expire the session on Hazelcast? (In Hazelcast Management Center, the number of session entries is always incrementing.)
You can add a TTL to the map config, so inactive sessions are evicted after a timeout. You can see an example here:
https://github.com/spring-projects/spring-session/blob/1.0.0.RELEASE/samples/hazelcast/src/main/java/sample/Initializer.java#L59
Also, I guess this sample application is what you want.
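The idea looks roughly like this on the member (server) side, not the client (a sketch against the Hazelcast 3.x API; the 1800-second values are illustrative, pick them to match your session timeout):

```java
// Sketch: server-side MapConfig giving the spring-session map a TTL and a
// max-idle timeout so stale session entries are evicted automatically.
Config config = new Config();
MapConfig sessionMapConfig = new MapConfig()
        .setName("spring:session:sessions")
        .setTimeToLiveSeconds(1800)   // absolute lifetime of an entry
        .setMaxIdleSeconds(1800);     // evict after 30 minutes of inactivity
config.addMapConfig(sessionMapConfig);
```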
