I'm using the spring-session libraries to persist the session in Hazelcast, like this:
@WebListener
public class HazelcastInitializer implements ServletContextListener {

    private HazelcastInstance instance;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        String sessionMapName = "spring:session:sessions";
        ServletContext sc = sce.getServletContext();

        ClientConfig clientConfig = new ClientConfig();
        clientConfig.getGroupConfig().setName("nameValue").setPassword("passValue");
        clientConfig.getNetworkConfig().addAddress("ipValue");
        clientConfig.getNetworkConfig().setSmartRouting(true);

        Collection<SerializerConfig> scfg = new ArrayList<SerializerConfig>();
        SerializerConfig serializer = new SerializerConfig()
                .setTypeClass(Object.class)
                .setImplementation(new ObjectStreamSerializer());
        scfg.add(serializer);
        clientConfig.getSerializationConfig().setSerializerConfigs(scfg);

        instance = HazelcastClient.newHazelcastClient(clientConfig);

        Map<String, ExpiringSession> sessions = instance.getMap(sessionMapName);
        SessionRepository<ExpiringSession> sessionRepository
                = new MapSessionRepository(sessions);
        SessionRepositoryFilter<ExpiringSession> filter
                = new SessionRepositoryFilter<ExpiringSession>(sessionRepository);

        Dynamic fr = sc.addFilter("springSessionFilter", filter);
        fr.addMappingForUrlPatterns(EnumSet.of(DispatcherType.REQUEST), true, "/*");
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        if (instance != null) {
            instance.shutdown();
        }
    }
}
How can I expire the sessions in Hazelcast? (In Hazelcast Management the number of session entries is always incrementing.)
You can add a TTL to the map config, so inactive sessions are evicted after some timeout. You can see an example here:
https://github.com/spring-projects/spring-session/blob/1.0.0.RELEASE/samples/hazelcast/src/main/java/sample/Initializer.java#L59
Also, I guess this sample application is what you want.
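For illustration, a minimal sketch of that idea (the map name matches the question; the 1800-second idle timeout is an assumed value, and the map config has to be applied on the Hazelcast member rather than on the client):

    Config config = new Config();
    // evict session entries that have not been read or written for 30 minutes,
    // so abandoned sessions eventually disappear from the map
    MapConfig sessionMapConfig = new MapConfig("spring:session:sessions");
    sessionMapConfig.setMaxIdleSeconds(1800);
    config.addMapConfig(sessionMapConfig);
    HazelcastInstance member = Hazelcast.newHazelcastInstance(config);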
Related
I am trying to connect to two different Cassandra clusters using Spring Data Cassandra, but it always uses only the first cluster's config; the second one is not taking effect. Any idea what I am doing wrong? This is the config that I am using:
First Cassandra cluster config:
@Configuration
@EnableCassandraRepositories(
        basePackageClasses = SourceRepository.class
)
public class SourceCassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getContactPoints() {
        return "localhost";
    }

    @Override
    public int getPort() {
        return 9051;
    }

    @Override
    protected String getKeyspaceName() {
        return "source_keyspace";
    }
}
Second Cassandra cluster config:
@Configuration
@EnableCassandraRepositories(
        basePackageClasses = TargetRepository.class,
        cassandraTemplateRef = "targetCassandraTemplate"
)
public class TargetCassandraConfig extends AbstractCassandraConfiguration {

    @Override
    public String getContactPoints() {
        return "localhost";
    }

    @Override
    public int getPort() {
        return 9052;
    }

    @Override
    protected String getKeyspaceName() {
        return "target_keyspace";
    }

    @Override
    @Bean("targetSession")
    public CassandraSessionFactoryBean session() throws ClassNotFoundException {
        final CassandraSessionFactoryBean session = super.session();
        session.setKeyspaceName(getKeyspaceName());
        session.setCluster(cluster().getObject());
        return session;
    }

    @Override
    public CassandraCqlClusterFactoryBean cluster() {
        CassandraCqlClusterFactoryBean cluster = super.cluster();
        cluster.setContactPoints(getContactPoints());
        cluster.setPort(getPort());
        return cluster;
    }

    @Bean("targetCassandraTemplate")
    public CassandraAdminOperations cassandraTemplate(
            @Qualifier("targetSession") final CassandraSessionFactoryBean session) throws Exception {
        return new CassandraAdminTemplate(session.getObject(), cassandraConverter());
    }
}
I always see that only the first cluster's node is getting added:
com.datastax.driver.core.Cluster : New Cassandra host localhost/127.0.0.1:9051 added
What am I doing wrong?
I spent 2 days debugging this and 10 mins after posting this question, I found the fix :)
I wasn't properly using the cluster bean that I created in the session bean. So I did the following and it worked:
@Override
@Bean("targetCassandraCluster")
public CassandraCqlClusterFactoryBean cluster() {
    CassandraCqlClusterFactoryBean cluster = super.cluster();
    cluster.setContactPoints(getContactPoints());
    cluster.setPort(getPort());
    return cluster;
}

@Bean("targetCassandraSession")
public CassandraSessionFactoryBean session(
        @Qualifier("targetCassandraCluster") final CassandraCqlClusterFactoryBean cluster
) throws ClassNotFoundException {
    final CassandraSessionFactoryBean session = super.session();
    session.setKeyspaceName(getKeyspaceName());
    session.setCluster(cluster.getObject());
    return session;
}

@Bean("targetCassandraTemplate")
public CassandraAdminOperations cassandraTemplate(
        @Qualifier("targetCassandraSession") final CassandraSessionFactoryBean session) throws Exception {
    return new CassandraAdminTemplate(session.getObject(), cassandraConverter());
}
I have done some tests on these two classes. Could someone please help determine whether they are thread-safe? Could someone also help identify whether using a plain HashMap instead of a ConcurrentHashMap would cause any concurrency issues? How can I make them more thread-safe? And what is the best approach to testing this under concurrency?
I tested it with HashMap only and it works fine. However, my test load is only around 20 req/s for 2 minutes.
Can anyone suggest whether I should increase the request rate and try again, or point out something that must be fixed?
@Component
public class TestLonggersImpl
        implements TestSLongger {

    @Autowired
    YamlReader yamlReader;

    @Autowired
    TestSCatalog gSCatalog;

    @Autowired
    ApplicationContext applicationContext;

    private static HashMap<String, TestLonggerImpl> gImplHashMap = new HashMap<>();

    private static final Longger LONGER = LonggerFactory.getLongger(AbstractSLongger.class);

    @PostConstruct
    public void init() {
        final String[] sts = yamlReader.getTestStreamNames();
        for (String st : sts) {
            System.out.println(st);
            LONGER.info(st);
        }

        HashMap<String, BSCatalog> statsCatalogHashMap = gSCatalog.getCatalogHashMap();
        for (Map.Entry<String, BSCatalog> entry : statsCatalogHashMap.entrySet()) {
            BSCatalog bCatalog = statsCatalogHashMap.get(entry.getKey());
            // Issue on creating the basicCategory
            SProperties sProperties = yamlReader.getTestMap().get(entry.getKey());
            Category category = new BasicCategory(sProperties.getSDefinitions(),
                    bCatalog.getVersion(),
                    bCatalog.getDescription(), new HashSet<>());
            final int version = statsCatalogHashMap.get(entry.getKey()).getVersion();
            getTestImplHashMap().put(entry.getKey(),
                    applicationContext.getBean(TestLonggerImpl.class, category,
                            entry.getKey(),
                            version));
        }
    }

    @Override
    public void logMessage(String st, String message) {
        if (getTestImplHashMap() != null && getTestImplHashMap().get(st) != null) {
            getTestImplHashMap().get(st).log(message);
        }
    }

    @VisibleForTesting
    static HashMap<String, TestLonggerImpl> getTestImplHashMap() {
        return gImplHashMap;
    }
}
*** 2nd class
@Component
public class GStatsCatalog {

    @Autowired
    YamlReader yamlReader;

    private static HashMap<String, BStatsCatalog> stCatalogHashMap = new HashMap<>();

    @PostConstruct
    public void init() {
        String[] streams = yamlReader.getGSNames();
        for (String stream : streams) {
            BStatsCatalog bCatalog = new BStatsCatalog();
            SProperties streamProperties = yamlReader.getGMap().get(stream);
            bCatalog.setSName(stream);
            int version = VERSION;
            try {
                version = Integer.parseInt(streamProperties.getVersion());
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
            bCatalog.setVersion(version);
            bCatalog.setDescription(streamProperties.getDescription());
            stCatalogHashMap.put(stream, bCatalog);
        }
    }

    public static HashMap<String, BStatsCatalog> getCatalogHashMap() {
        return stCatalogHashMap;
    }

    public void setYamlReader(YamlReader yamlReader) {
        this.yamlReader = yamlReader;
    }
}
I think the methods under @PostConstruct are thread-safe; they only run once, after the bean is created, over the whole lifecycle of the bean.
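If you want to harden this anyway, one possible change (just a sketch, assuming the static fields and the getters' return types are widened to the Map interface and everything else stays as posted) is to back both maps with ConcurrentHashMap, which removes the undefined behaviour a plain HashMap has under concurrent writes:

    // Hypothetical drop-in change for TestLonggersImpl above;
    // the same change applies to stCatalogHashMap in GStatsCatalog.
    private static final Map<String, TestLonggerImpl> gImplHashMap = new ConcurrentHashMap<>();

    @VisibleForTesting
    static Map<String, TestLonggerImpl> getTestImplHashMap() {
        return gImplHashMap;
    }

Note this still does not make compound actions (check-then-put, or iterating while another thread populates the map) atomic; it only protects the individual get/put operations.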
This is my source. How can I print a message if one of the members is shut down for some reason? I think I can use some event or some kind of listener, but how?
import com.hazelcast.core.*;
import com.hazelcast.config.*;

import java.util.Map;

/**
 * @author alvel
 */
public class ShutDown {

    public static void main(String[] args) {
        Config cfg = new Config();
        HazelcastInstance memberOne = Hazelcast.newHazelcastInstance(cfg);
        HazelcastInstance memberTwo = Hazelcast.newHazelcastInstance(cfg);

        Map<Integer, String> customerMap = memberOne.getMap("customers");
        customerMap.put(1, "google");
        customerMap.put(2, "apple");
        customerMap.put(3, "yahoo");
        customerMap.put(4, "microsoft");
        System.out.println("Hazelcast Nodes in this cluster" + Hazelcast.getAllHazelcastInstances().size());

        memberOne.shutdown();
        System.out.println("Hazelcast Nodes in this cluster After shutdown" + Hazelcast.getAllHazelcastInstances().size());

        Map<Integer, String> customerRestored = memberTwo.getMap("customers");
        for (String val : customerRestored.values()) {
            System.out.println("-" + val);
        }
    }
}
Try this; it adds a few lines to your code and a new class:
public class ShutDown {

    static {
        // ONLY TEMPORARY
        System.setProperty("hazelcast.logging.type", "none");
    }

    public static void main(String[] args) {
        Config cfg = new Config();

        HazelcastInstance memberOne = Hazelcast.newHazelcastInstance(cfg);
        // ADDED TO MEMBER ONE
        memberOne.getCluster().addMembershipListener(new ShutDownMembershipListener());

        HazelcastInstance memberTwo = Hazelcast.newHazelcastInstance(cfg);
        // ADDED TO MEMBER TWO
        memberTwo.getCluster().addMembershipListener(new ShutDownMembershipListener());

        Map<Integer, String> customerMap = memberOne.getMap("customers");
        customerMap.put(1, "google");
        customerMap.put(2, "apple");
        customerMap.put(3, "yahoo");
        customerMap.put(4, "microsoft");
        System.out.println("Hazelcast Nodes in this cluster" + Hazelcast.getAllHazelcastInstances().size());

        memberOne.shutdown();
        System.out.println("Hazelcast Nodes in this cluster After shutdown" + Hazelcast.getAllHazelcastInstances().size());

        Map<Integer, String> customerRestored = memberTwo.getMap("customers");
        for (String val : customerRestored.values()) {
            System.out.println("-" + val);
        }
    }

    static class ShutDownMembershipListener implements MembershipListener {

        @Override
        public void memberAdded(MembershipEvent membershipEvent) {
            System.out.println(this + membershipEvent.toString());
        }

        @Override
        public void memberAttributeChanged(MemberAttributeEvent arg0) {
        }

        @Override
        public void memberRemoved(MembershipEvent membershipEvent) {
            System.out.println(this + membershipEvent.toString());
        }
    }
}
The line System.setProperty("hazelcast.logging.type", "none") is just for testing to make it simpler to see what is happening.
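If you would rather not call addMembershipListener on each instance by hand, a declarative alternative (just a sketch, reusing the ShutDownMembershipListener class above) is to register the listener in the Config, so every instance created from that config picks it up at startup:

    Config cfg = new Config();
    // every HazelcastInstance built from this config registers the listener automatically
    cfg.addListenerConfig(new ListenerConfig(new ShutDownMembershipListener()));
    HazelcastInstance memberOne = Hazelcast.newHazelcastInstance(cfg);
    HazelcastInstance memberTwo = Hazelcast.newHazelcastInstance(cfg);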
I have an unbounded Dataflow pipeline that reads from Pub/Sub, applies a ParDo and writes to Cassandra. Since it applies only ParDo transformations, I am using the default global window with the default triggering even though the source is unbounded.
In a pipeline like that, how should I keep the connection to Cassandra?
Currently I am keeping it in startBundle, like this:
private class CassandraWriter<T> extends DoFn<T, Void> {

    private transient Cluster cluster;
    private transient Session session;
    private transient MappingManager mappingManager;

    @Override
    public void startBundle(Context c) {
        this.cluster = Cluster.builder()
                .addContactPoints(hosts)
                .withPort(port)
                .withoutMetrics()
                .withoutJMXReporting()
                .build();
        this.session = cluster.connect(keyspace);
        this.mappingManager = new MappingManager(session);
    }

    @Override
    public void processElement(ProcessContext c) throws IOException {
        T element = c.element();
        Mapper<T> mapper = (Mapper<T>) mappingManager.mapper(element.getClass());
        mapper.save(element);
    }

    @Override
    public void finishBundle(Context c) throws IOException {
        session.close();
        cluster.close();
    }
}
However, this way a new connection is created for every element.
Another option is to pass it as a side input like in https://github.com/benjumanji/cassandra-dataflow:
public PDone apply(PCollection<T> input) {
    Pipeline p = input.getPipeline();

    CassandraWriteOperation<T> op = new CassandraWriteOperation<T>(this);
    Coder<CassandraWriteOperation<T>> coder =
            (Coder<CassandraWriteOperation<T>>) SerializableCoder.of(op.getClass());
    PCollection<CassandraWriteOperation<T>> opSingleton =
            p.apply(Create.<CassandraWriteOperation<T>>of(op)).setCoder(coder);
    final PCollectionView<CassandraWriteOperation<T>> opSingletonView =
            opSingleton.apply(View.<CassandraWriteOperation<T>>asSingleton());

    PCollection<Void> results = input.apply(ParDo.of(new DoFn<T, Void>() {
        @Override
        public void processElement(ProcessContext c) throws Exception {
            // use the side input here
        }
    }).withSideInputs(opSingletonView));

    PCollectionView<Iterable<Void>> voidView = results.apply(View.<Void>asIterable());

    opSingleton.apply(ParDo.of(new DoFn<CassandraWriteOperation<T>, Void>() {
        private static final long serialVersionUID = 0;

        @Override
        public void processElement(ProcessContext c) {
            CassandraWriteOperation<T> op = c.element();
            op.finalize();
        }
    }).withSideInputs(voidView));

    return new PDone();
}
However, this way I have to use windowing, since PCollectionView<Iterable<Void>> voidView = results.apply(View.<Void>asIterable()); applies a group-by.
In general, how should a PTransform that writes from an unbounded PCollection to an external database keep its connection to the database?
You are correctly observing that a typical bundle size in the streaming/unbounded case is smaller compared to the batch/bounded case. The actual bundle size depends on many parameters, and sometimes bundles may contain a single element.
One way of solving this problem would be to use a pool of connections per worker, stored in a static state of your DoFn. You should be able to initialize it during the first call to startBundle, and use it across bundles. Alternatively, you can create a connection on demand and release it to the pool for reuse when no longer necessary.
You should make sure the static state is thread-safe, and that you aren't making any assumptions about how Dataflow manages bundles.
As Davor Bonaci suggested, using a static variable solved the problem.
public class CassandraWriter<T> extends DoFn<T, Void> {

    private static final Logger log = LoggerFactory.getLogger(CassandraWriter.class);

    // Prevent multiple threads from creating multiple cluster connections in parallel.
    private static final Object lock = new Object();
    private static transient Cluster cluster;
    private static transient Session session;
    private static transient MappingManager mappingManager;

    private final String[] hosts;
    private final int port;
    private final String keyspace;

    public CassandraWriter(String[] hosts, int port, String keyspace) {
        this.hosts = hosts;
        this.port = port;
        this.keyspace = keyspace;
    }

    @Override
    public void startBundle(Context c) {
        synchronized (lock) {
            if (cluster == null) {
                cluster = Cluster.builder()
                        .addContactPoints(hosts)
                        .withPort(port)
                        .withoutMetrics()
                        .withoutJMXReporting()
                        .build();
                session = cluster.connect(keyspace);
                mappingManager = new MappingManager(session);
            }
        }
    }

    @Override
    public void processElement(ProcessContext c) throws IOException {
        T element = c.element();
        Mapper<T> mapper = (Mapper<T>) mappingManager.mapper(element.getClass());
        mapper.save(element);
    }
}
It seems that the MapStore (write-behind mode) does not work properly when some map items are replaced.
I expected that map items which have been replaced are not processed anymore.
I am using Hazelcast 3.2.5.
Did I miss something here?
Please see the server test class, client test class, and output below, which demonstrate the problem.
Server Class:
public class HazelcastInstanceTest {

    public static void main(String[] args) {
        Config cfg = new Config();

        MapConfig mapConfig = new MapConfig();
        MapStoreConfig mapStoreConfig = new MapStoreConfig();
        mapStoreConfig.setEnabled(true);
        mapStoreConfig.setClassName("com.test.TestMapStore");
        mapStoreConfig.setWriteDelaySeconds(15);
        mapConfig.setMapStoreConfig(mapStoreConfig);
        mapConfig.setName("customers");
        cfg.addMapConfig(mapConfig);

        HazelcastInstance instance = Hazelcast.newHazelcastInstance(cfg);
    }
}
MapStore Impl Class
public class TestMapStore implements MapStore {

    @Override
    public Object load(Object arg0) {
        System.out.println("--> LOAD");
        return null;
    }

    @Override
    public Map loadAll(Collection arg0) {
        System.out.println("--> LOAD ALL");
        return null;
    }

    @Override
    public Set loadAllKeys() {
        System.out.println("--> LOAD ALL KEYS");
        return null;
    }

    @Override
    public void delete(Object arg0) {
        System.out.println("--> DELETE");
    }

    @Override
    public void deleteAll(Collection arg0) {
        System.out.println("--> DELETE ALL");
    }

    @Override
    public void store(Object arg0, Object arg1) {
        System.out.println("--> STORE " + arg1.toString());
    }

    @Override
    public void storeAll(Map arg0) {
        System.out.println("--> STORE ALL");
    }
}
Client Class
public class HazelcastClientTest {

    public static void main(String[] args) throws Exception {
        ClientConfig clientConfig = new ClientConfig();
        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);

        IMap mapCustomers = client.getMap("customers");
        System.out.println("Map Size:" + mapCustomers.size());
        mapCustomers.put(1, "Item A");
        mapCustomers.replace(1, "Item B");
        mapCustomers.replace(1, "Item C");
        System.out.println("Map Size:" + mapCustomers.size());
    }
}
Client Output (which is ok):
Map Size:0
Map Size:1
Server output (which is not OK, I suppose; I expected only Item C):
--> LOAD ALL KEYS
--> LOAD
--> STORE Item A
--> STORE ALL
--> STORE Item B
--> STORE Item C
Any help is appreciated. Many thanks.