Why is there no built-in HashMapStreamSerializer in Hazelcast?

The Hazelcast documentation provides examples of how we can write our own LinkedListStreamSerializer and HashMapStreamSerializer and it says that support will be added for these in the future.
It looks as though the LinkedListStreamSerializer is in fact supported now, which is great, but not the HashMap one.
I'm wondering if there is any reason why not, and whether I should be concerned about continuing to use the example from the documentation.

You should be fine with the HashMapStreamSerializer.
It's tricky to add a new serializer to Hazelcast now due to backward compatibility: older clients wouldn't be able to deserialize blobs serialized with the new serializer.
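For reference, the documentation's HashMapStreamSerializer is essentially a length-prefixed encoding: write the entry count, then each key/value pair. Here is a minimal sketch of that idea using plain java.io streams in place of Hazelcast's ObjectDataOutput/ObjectDataInput (String keys and values assumed for brevity; the writeMap/readMap helper names are hypothetical, standing in for the serializer's write/read methods):

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;

public class MapStreamDemo {
    // Write the map as: int size, then each key and value as UTF strings.
    static void writeMap(DataOutput out, Map<String, String> map) throws IOException {
        out.writeInt(map.size());
        for (Map.Entry<String, String> e : map.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeUTF(e.getValue());
        }
    }

    // Read it back: the size first, then that many key/value pairs.
    static Map<String, String> readMap(DataInput in) throws IOException {
        int size = in.readInt();
        Map<String, String> map = new HashMap<>(size);
        for (int i = 0; i < size; i++) {
            map.put(in.readUTF(), in.readUTF());
        }
        return map;
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> original = new HashMap<>();
        original.put("a", "1");
        original.put("b", "2");

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        writeMap(new DataOutputStream(bytes), original);
        Map<String, String> copy = readMap(
                new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

        System.out.println(copy.equals(original)); // prints true
    }
}
```

As long as both sides agree on this format, the round trip is stable, which is why continuing to use the documentation's serializer is safe.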

Hazelcast JCache predicate

Previously I used the Hazelcast Java API and was able to use a predicate to filter the results returned.
Currently, as we are moving towards portability of the IMDG, I am using the JCache API. However, I did not manage to find anything that would allow me to do predicate/filtering/searching on the IMDG cache.
Has anyone done a similar thing before?
JCache does not have any query API. If you use the IMap you get the same features as in JCache, plus querying.
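To illustrate the difference: with no query API, JCache code ends up pulling entries to the client and filtering there, whereas IMap can evaluate a Predicate (e.g. `Predicates.greaterThan("age", 28)`) on the cluster members. A stdlib-only sketch of the client-side filtering JCache forces you into (the Employee type is hypothetical):

```java
import java.util.List;
import java.util.Map;

public class ClientSideFilter {
    // Hypothetical value type; with IMap you would query fields like this via a Predicate.
    record Employee(String name, int age) {}

    // What JCache forces: fetch the entries and filter on the client.
    static List<Employee> olderThan(Map<String, Employee> cache, int minAge) {
        return cache.values().stream()
                .filter(e -> e.age() > minAge)
                .toList();
    }

    public static void main(String[] args) {
        Map<String, Employee> cache = Map.of(
                "1", new Employee("ann", 30),
                "2", new Employee("bob", 25));
        System.out.println(olderThan(cache, 28).size()); // prints 1
    }
}
```

With IMap the equivalent filter runs where the data lives, so only matching entries cross the network.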

Configuring Distributed Objects Dynamically

I'm currently evaluating using Hazelcast for our software. Would be glad if you could help me elucidate the following.
I have one specific requirement: I want to be able to configure distributed objects (say maps, queues, etc.) dynamically. That is, I can't have all the configuration data at hand when I start the cluster. I want to be able to initialise (and dispose) services on-demand, and their configuration possibly to change in-between.
The version I'm evaluating is 3.6.2.
The documentation I have available (the Reference Manual, the Deployment Guide, as well as the "Mastering Hazelcast" e-book) is very skimpy on details w.r.t. this subject, and even partially contradictory.
So, to clarify an intended usage: I want to start the cluster; then, at some point, create, say, a distributed map structure, use it across the nodes; then dispose it and use a map with a different configuration (say, number of backups, eviction policy) for the same purposes.
The documentation mentions, and this is to be expected, that bad things will happen if nodes have different configurations for the same distributed object. That makes perfect sense and is fine; I can ensure that the configs will be consistent.
Looking at the code, it would seem to be possible to do what I intend: when creating a distributed object, if it doesn't already have a proxy, the HazelcastInstance will go look at its Config to create a new one and store it in its local list of proxies. When that object is destroyed, its proxy is removed from the list. On the next invocation, it would go reload from the Config. Furthermore, that config is writeable, so if it has been changed in-between, it should pick up those changes.
So this would seem like it should work, but given how silent the documentation is on the matter, I'd like some confirmation.
Is there any reason why the above shouldn't work?
If it should work, is there any reason not to do the above? For instance, are there plans to change the code in future releases in a way that would prevent this from working?
If so, is there any alternative?
Changing the configuration on the fly for an already created distributed object is not possible with the current version, though there is a plan to add this feature in a future release. Once created, the map configs stay at the node level, not at the cluster level.
As long as you are creating the distributed map fresh from the config, using it, and then destroying it, your approach should work without any issues.
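A sketch of that create-use-destroy cycle against the 3.6-era API (the class and map names are illustrative; as the question notes, the same config change must be applied on every node to keep configurations consistent):

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class DynamicMapConfigSketch {
    public static void main(String[] args) {
        Config config = new Config();
        config.addMapConfig(new MapConfig("orders").setBackupCount(1));
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        IMap<String, String> orders = hz.getMap("orders"); // proxy built from the current config
        orders.put("key", "value");
        orders.destroy();                                  // proxy removed from the node

        // The Config object is writable; change it before the next proxy creation.
        // The same change must be made on every node before the map is recreated.
        config.getMapConfig("orders").setBackupCount(2);
        IMap<String, String> recreated = hz.getMap("orders"); // fresh proxy, new config
    }
}
```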

Mongooplog alternative

As we all know, the mongooplog tool is going to be removed in upcoming releases. I need help with the following issue:
I was planning to create a listener using mongooplog which would read any kind of activity on MongoDB and generate a trigger according to that activity, which would then hit another server. Now, since mongooplog is going away, can anyone suggest what alternative I can use in this case, and how to use it?
I got this warning when trying to use mongooplog. Please let me know if you have any further questions.
warning: mongooplog is deprecated, and will be removed completely in a future release
PS: I am using node.js framework to implement the listener. I have not written any code yet so have no code to share.
The deprecation message you are quoting only refers to the mongooplog command-line tool, not the general approach of tailing the oplog. The mongooplog tool can be used for some types of data migrations, but isn't the right approach for a general purpose listener or to wrap in your Node.js application.
You should continue to create a tailable cursor to follow oplog activity. Tailable cursors are supported directly by the MongoDB drivers. For an example using Node.js see: The MongoDB Oplog & Node.js.
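As a rough illustration, here is what tailing the oplog looks like with the MongoDB Java driver's 3.x API; the same pattern exists in the Node.js driver mentioned in the question. This is a sketch, not a production listener: it assumes a replica set (standalone servers have no oplog) and a locally running mongod:

```java
import com.mongodb.CursorType;
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.model.Filters;
import org.bson.BsonTimestamp;
import org.bson.Document;

public class OplogTailSketch {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017);
        MongoCollection<Document> oplog =
                client.getDatabase("local").getCollection("oplog.rs");

        // Resume from the last timestamp already processed (here: the beginning).
        BsonTimestamp lastSeen = new BsonTimestamp(0, 0);

        try (MongoCursor<Document> cursor = oplog
                .find(Filters.gt("ts", lastSeen))
                .cursorType(CursorType.TailableAwait) // block and wait for new entries
                .iterator()) {
            while (cursor.hasNext()) {
                Document op = cursor.next();
                // "op" field: "i" = insert, "u" = update, "d" = delete
                System.out.println(op.getString("op") + " on " + op.getString("ns"));
            }
        }
    }
}
```

Note that the oplog format is internal to replication, which is exactly why the feature request linked below proposes a formal notification API.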
You may also want to watch/upvote SERVER-13932: Change Notification Stream API in the MongoDB issue tracker, which is a feature suggestion for a formal API (rather than relying on the internal oplog format used by replication).

datastax driver vs spring-data-cassandra

Hey, I am new to Cassandra, and I am comfortable with Spring's JdbcTemplate.
Can anyone please explain the difference between the two? Also, can you suggest which one is better to use?
Thanks.
spring-data-cassandra uses datastax's java-driver, so the decision to be made is really whether or not you need the functionality of spring-data.
Some features from spring data that may be useful for you (documented here):
spring xml configuration for configuring your Cluster instance (especially useful if you are already using spring).
object mapping component.
The java-driver also has a mapping component that is worth exploring.
In my opinion if you are already using spring, it is worth looking into spring-data-cassandra. Otherwise, it would be good to start off with just the datastax java-driver.

Multi-Version Concurrency Control and CQRS and DDD

In order to support offline clients, I want to evaluate how to fit Multi-Version Concurrency Control with a CQRS-DDD system.
Learning from CouchDB, I felt tempted to provide each Entity with a version field. However, there are other version concurrency algorithms, like vector clocks. This made me think that maybe I should not expose this version concept for each Entity and/or Event at all.
Unfortunately most of the implementations I have seen are based on the assumption that the software runs on a single server, where the timestamps for the events come from one reliable source. However if some events are generated remotely AND offline, there is the problem with the local client clock offset. In that case, a normal timestamp does not seem a reliable source for ordering my events.
Does this force me to evaluate some form of MVCC solution not based on timestamps?
What implementation details must an offline-CQRS client evaluate to synchronize a delayed chain of events with a central server?
Is there any good opensource example?
Should my DDD Entities and/or CQRS Query DTOs provide a version parameter?
I manage a version number and it has worked out well for me. The nice thing about the version number is that you can make your code very explicit when dealing with concurrency conflicts.
My approach is to ensure that my DTOs all carry the version number of the aggregate they are associated with. When I send in a command, it includes the current version as seen on the client. This number may or may not be in sync with the actual version of the aggregate, i.e. the client has been offline. Before the event is persisted, I check that the version number is the one I expected; if it is not, I check the intervening events to see whether any of them actually conflict. Only if they do, do I raise an exception. This is essentially a very fine-grained form of optimistic concurrency. If you're interested, I've written more detail, including some code samples, on my blog at: http://danielwhittaker.me/2014/09/29/handling-concurrency-issues-cqrs-event-sourced-system/
I hope that helps.
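The check described above can be sketched in plain Java. This is a simplification under stated assumptions: the Event shape and the conflictKey notion are hypothetical; a real implementation would compare events by aggregate and the state they affect:

```java
import java.util.List;

public class ConcurrencyCheck {
    // Hypothetical event: the aggregate version it was applied at, plus a key
    // identifying what it touched (e.g. a field or sub-entity).
    record Event(int version, String conflictKey) {}

    static class ConcurrencyException extends RuntimeException {
        ConcurrencyException(String msg) { super(msg); }
    }

    // Fine-grained optimistic concurrency: if the client's expected version is
    // stale, only fail when one of the events it missed actually conflicts.
    static void checkVersion(List<Event> committed, int expectedVersion, Event incoming) {
        int currentVersion =
                committed.isEmpty() ? 0 : committed.get(committed.size() - 1).version();
        if (expectedVersion == currentVersion) {
            return; // client was up to date
        }
        for (Event e : committed) {
            if (e.version() > expectedVersion
                    && e.conflictKey().equals(incoming.conflictKey())) {
                throw new ConcurrencyException(
                        "conflicts with event at version " + e.version());
            }
        }
        // stale but non-conflicting: allow the command
    }

    public static void main(String[] args) {
        List<Event> history = List.of(new Event(1, "address"), new Event(2, "phone"));
        checkVersion(history, 1, new Event(3, "email")); // stale but no conflict: ok
        try {
            checkVersion(history, 1, new Event(3, "phone")); // missed event 2 touches "phone"
        } catch (ConcurrencyException ex) {
            System.out.println("conflict detected"); // prints conflict detected
        }
    }
}
```

The coarse alternative, rejecting any command whose expected version is stale, is simpler but punishes offline clients for every unrelated change.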
I suggest you have a look at Greg's presentation on the subject. It might have the answers you're looking for: https://skillsmatter.com/skillscasts/1980-cqrs-not-just-for-server-systems
I suggest you rethink your domain: separate the remote client logic into its own bounded context and integrate it with the other BCs using the known DDD principles for BC interop.
