I have a requirement to keep data in memory and distributed across nodes. I can see that Hazelcast and Apache Ignite support JCache and key-value pairs, but they distribute the data by their own algorithm (e.g. hashing).
My requirement is that the data (elements) should be sorted by timestamp (one of the fields in the Java data object) and partitioned on the heap as a list (like a distributed linked list).
Ex: Let's say we have 4 nodes.
```
List 1 on Node 1 -> element(1), element(2), element(3)
List 2 on Node 2 -> element(4), element(5), element(6)
List 3 on Node 3 -> element(7), element(8), element(9)
List 4 on Node 4 -> element(10), element(11), element(12)
```
where element(n) transaction time < element(n+1) transaction time.
The goal is to run the algorithm in memory on each node against its local data, without network calls.
For Hazelcast, you probably want near-cache.
This lets the system distribute the data the way it should, but each node can keep a local copy of the data it is using.
You can override the distribution algorithm if you wish certain pieces of data to be kept together. However, trying to control where that data lives stops a distributed system from rebalancing the data to even out the load.
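If you do decide to co-locate related entries, a minimal sketch using Hazelcast's PartitionAware interface could look like this (the key class and the bucketId field are hypothetical, not from the question):

```java
import java.io.Serializable;
import com.hazelcast.core.PartitionAware;

// Hypothetical map key: every element that shares the same bucketId
// (e.g. a time bucket) is hashed into the same Hazelcast partition,
// so those entries live on the same member and can be processed locally.
public class ElementKey implements PartitionAware<String>, Serializable {

    private final long elementId;
    private final String bucketId;

    public ElementKey(long elementId, String bucketId) {
        this.elementId = elementId;
        this.bucketId = bucketId;
    }

    @Override
    public String getPartitionKey() {
        // Hazelcast hashes this value, not the whole key, to pick the partition.
        return bucketId;
    }
}
```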
In addition to Neil's near-cache advice, you should also look into the "Distributed Computing" section within the "Finding the Right Tool" chapter of the Hazelcast documentation. There are three ways to proceed:
Hazelcast Jet stream & batch engine - your pipelines (jobs) can also process data locally;
ExecutorService - allows you to execute your code on cluster members;
EntryProcessor - allows your code to process IMap entries locally on members.
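As a rough illustration of the EntryProcessor option, here is a minimal sketch for Hazelcast 3.x (the map name "counters" and the increment logic are just placeholders):

```java
import java.util.Map;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.AbstractEntryProcessor;

// Increment every value of the "counters" IMap in place. The processor
// runs on the member that owns each partition, so no entry values travel
// over the network during processing.
public class IncrementProcessor extends AbstractEntryProcessor<String, Long> {

    @Override
    public Object process(Map.Entry<String, Long> entry) {
        entry.setValue(entry.getValue() + 1);   // mutate the entry locally
        return null;                            // nothing to send back
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Long> counters = hz.getMap("counters");
        counters.put("element-1", 0L);
        counters.executeOnEntries(new IncrementProcessor());
    }
}
```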
I have a CQL table which has 2 columns:
{
long minuteTimeStamp -> only the minute part of the epoch time; seconds are ignored
String data -> some data
}
I have a 5-node Cassandra cluster and I want to distribute each minute's data uniformly across all 5 nodes. So if one minute's data is ~10k records, each node should hold ~2k of them.
I also want to consume each minute's data in parallel, meaning 5 different readers, one reading on each node.
One solution I came up with is to keep one more column in the table:
{
long minuteTimeStamp
int shardIdx
String data
partition key : (minuteTimeStamp,shardIdx)
}
While writing the data I would round-robin over shardIdx. But since Cassandra uses vnodes, it is possible that (min0, 0) goes to node0 and (min0, 1) also goes to node0, because that token might belong to node0 as well. This way I can create hotspots, and it also hampers reads: the 5 parallel readers were meant to read with one on each node, but more than one reader might land on the same node.
How can we design our partition key so that the data is uniformly distributed, without writing a custom partitioner?
There's no need to make the data distribution more complex by sharding.
The default Murmur3Partitioner will distribute your data evenly across nodes as you approach hundreds of thousands of partitions.
If your use case really is going to hotspot on one partition, then that's more an inherent problem with your use case/access pattern, but it's rare in practice unless you have a super-node issue, for example in a social graph use case where Taylor Swift or Barack Obama has millions more followers than everyone else. Cheers!
I'm trying to create a sort of consumer group, as exists in Kafka, but for Cassandra. The goal is to have a request paginated, with each page handled by one instance of an app.
Is there any notion like a consumer group in Cassandra?
TL;DR: no, the consumer-group notion doesn't exist in Cassandra's clients. The burden of deciding which client processes what is entirely on the app developer.
You can use Cassandra's tokens to do selective paging.
Assuming 2 clients (easy example)
Client 1 pages from -2^63 to 0
Client 2 pages from 1 to 2^63 - 1
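As a minimal sketch of this idea with the DataStax Java driver 4.x (the keyspace/table my_ks.my_table and the partition key column pk are assumptions, not from the question):

```java
import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.PreparedStatement;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.Row;

public class TokenRangeReader {

    public static void main(String[] args) {
        // Which half of the Murmur3 token ring this client owns (0 or 1).
        int clientId = Integer.parseInt(args[0]);
        long start = (clientId == 0) ? Long.MIN_VALUE : 1L;
        long end   = (clientId == 0) ? 0L : Long.MAX_VALUE;

        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
                .withLocalDatacenter("datacenter1")
                .build()) {

            // token() restricts the scan to this client's slice of the ring.
            PreparedStatement ps = session.prepare(
                "SELECT * FROM my_ks.my_table "
                + "WHERE token(pk) >= ? AND token(pk) <= ?");

            ResultSet rs = session.execute(ps.bind(start, end));
            for (Row row : rs) {
                // The driver pages transparently; process each row here.
                System.out.println(row.getFormattedContents());
            }
        }
    }
}
```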
The above idea assumes you want to page through all the data in something similar to a batch process, which wouldn't be a good fit for Cassandra.
If you're after the latest N results, where the first half is sent to client 1 and the second half to client 2, you can use a logical bucket in your partitioning key.
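For example (purely illustrative), a partition key such as ((day, bucket)) where bucket is 0 or 1 would let client 1 query bucket 0 and client 2 query bucket 1 for the same day.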
If you're looking to scale the processing of a large number of Cassandra rows, you might consider a scalable execution platform like Flink or Storm. You'd be able to parallelize both the reading of the rows and the processing of the rows, although a parallelizable source (or spout in Storm) is not something you can get out of the box.
I have some flows that get data from an Azure Event Hub; I'm using the GetAzureEventHub processor. The data I'm getting is multiplied by the number of nodes I have in the cluster (I have 4 nodes). If I tell the processor to run only on the primary node, the data is not replicated 4 times.
I found that the Event Hub accepts up to 5 readers per consumer group (I read this in this article); each reader has its own separate offset and they all consume the same data. So, in conclusion, I'm reading the same data 4 times.
I have 2 questions:
How can I coordinate these 4 nodes so that they go through the same reader?
In case this is not possible, how can I tell NiFi that only one of the nodes should read?
Thanks; if you need any clarification, ask for it.
GetAzureEventHub currently does not perform any coordination across nodes, so you would have to run it on the primary node only to avoid duplication.
The processor would require refactoring to perform coordination across the nodes of the cluster and assign unique partitions to each node, and handle failures (i.e. if a node consuming partition 1 goes down, another node has to take over partition 1).
If the Azure client provided this coordination somehow (similar to the Kafka client) then it would require less work on the NiFi side, but I'm not familiar enough with Azure to know if it provides anything like this.
I am working on a project to change my repository to Hazelcast.
I need to find some documents by date range, store type and store IDs.
During my tests I got 90k throughput using one c3.large instance, but when I run the same test with more instances the throughput does not scale accordingly (10 instances: 500k, 20 instances: 700k).
These numbers were the best I could get by tuning some properties:
hazelcast.query.predicate.parallel.evaluation
hazelcast.operation.generic.thread.count
hz:query
I have tried changing the instance type to c3.2xlarge to get more processing power, but the numbers don't justify the price.
How can I optimize Hazelcast to be faster in this scenario?
My use case doesn't use map.get(key), only map.values(predicate).
Settings:
Hazelcast 3.7.1
Map as Data Structure;
Complex object using IdentifiedDataSerializable;
Map index configured;
Only 2000 documents on map;
Hazelcast embedded configured by Spring Boot Application (singleton);
All instances in same region.
Test
Gatling
New Relic as service monitor.
Any help is welcome. Thanks.
If your use case only uses map.values with a predicate, I would strongly suggest using OBJECT as the in-memory storage format. This way there will not be any serialization involved during query execution.
On the other hand, it is normal to get very high numbers when you only have 1 member, because no data moves across the network. To improve further, I would look at EC2 instances with higher network capacity. For example, c3.8xlarge has a 10 Gbit network, compared to the "High" network performance that comes with c3.2xlarge.
I can't promise how much of an increase you will get, but I would definitely try these changes first.
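As a minimal sketch of switching to the OBJECT in-memory format (assuming Hazelcast 3.7; the map name documents and the indexed field names storeType and date are placeholders):

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.InMemoryFormat;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.MapIndexConfig;
import com.hazelcast.core.Hazelcast;

public class QueryTunedMember {

    public static void main(String[] args) {
        Config config = new Config();

        MapConfig mapConfig = config.getMapConfig("documents"); // placeholder map name
        // Keep values deserialized in memory so predicate evaluation
        // does not have to deserialize every entry on each query.
        mapConfig.setInMemoryFormat(InMemoryFormat.OBJECT);
        // Indexes on the queried fields (assumed field names).
        mapConfig.addMapIndexConfig(new MapIndexConfig("storeType", false));
        mapConfig.addMapIndexConfig(new MapIndexConfig("date", true));

        Hazelcast.newHazelcastInstance(config);
    }
}
```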
Does Hazelcast have replication similar to Ehcache?
http://www.ehcache.org/generated/2.9.0/pdf/Ehcache_Replication_Guide.pdf
I've found only distributed maps, but not replicated ones.
If you have N Hazelcast instances clustered together (N nodes), then your data will be partitioned and spread across those N nodes. Each partition is owned by one node. Based on the configured backup count (default 1), each partition is also replicated on some other nodes. So each node holds the partitions it owns, plus some other partitions that are just replicas.
See the ReplicatedMap (http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#replicated-map), which is best-effort replicated and therefore pretty much the equivalent of the replication in Ehcache.
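A minimal usage sketch (the map name "cache" is just an example):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ReplicatedMap;

public class ReplicatedMapExample {

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Every put is asynchronously replicated to all cluster members,
        // so reads on any member are served from local memory.
        ReplicatedMap<String, String> cache = hz.getReplicatedMap("cache");
        cache.put("key-1", "value-1");
        System.out.println(cache.get("key-1"));
    }
}
```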
In my experience, aside from the replicated data structures, there is a way to get replication tailored to your needs. This is not an official way, just a workaround: you can use backups to make your life easier.
Example:
I have an IMap and I want it on all my nodes (I don't know why you would want that, but let's say you do). I can do 2 things:
backup-count = (NumberOfNodes - 1)
read-backup-data = true
So now all your nodes have a copy of the data, and since you can read from the backups, it behaves like a replicated map.
You can play with the backup-count or with the async-backup-count depending on the consistency level you want. Click here for more details.
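A minimal configuration sketch of that workaround (assuming a 4-node cluster and a hypothetical map named myMap):

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;

public class BackupAsReplicationExample {

    public static void main(String[] args) {
        int numberOfNodes = 4;                       // assumed cluster size

        Config config = new Config();
        MapConfig mapConfig = config.getMapConfig("myMap");
        // One synchronous backup on every other node...
        mapConfig.setBackupCount(numberOfNodes - 1);
        // ...and allow members to serve reads from their local backup copies.
        mapConfig.setReadBackupData(true);

        Hazelcast.newHazelcastInstance(config);
    }
}
```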
Hazelcast's replicated maps are far superior to Ehcache's; they replicate the entire map across every node (JVM) in your cluster.
On top of that you can use a partitioned map which will split your data across all the JVMs but appear to the client as if it is a single map.
This allows you to store much more data than can be stored in a single JVM.
A good rule of thumb for sizing the heap of each node in your cluster is that you get 1/3 for data, 1/3 for backups of other nodes (assuming a backup count of 1) and 1/3 for JVM processing.
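For example, under that rule a member with a 12 GB heap would hold roughly 4 GB of its own data, keep roughly 4 GB of backup data from other members, and leave roughly 4 GB of headroom for JVM processing.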