Limit on the number of Maps in Hazelcast

I am making a game and will use Hazelcast to save player data. Should I save each player's data as a single map entry? Because if I save a player's data as one map value, then with every small change, like increasing the gold value, I have to put the whole player object back, e.g.:
playerData.gold = newValue;
players.replace(playerID, playerData);
But if I store each player's data as its own Map, I only need to put the new gold value, e.g.:
playerA.replace("gold", newGoldValue)
But I'm afraid that creating many maps is not good (in case I have more than 1 million players). Can I create as many maps as I want? If not, how many maps can I create?

Hoang
I just answered the same question in our community channel:
You can use an EntryProcessor to read or update only some of the properties of the player data, without putting the whole value back. I don't know how many users you'll have, but if you have 100k users, for example, creating 100k maps won't help you; you can, however, have 100k, tens of millions, or more records in a single map. Please check: http://docs.hazelcast.org/docs/3.10.4/manual/html-single/index.html#entry-processor and https://github.com/hazelcast/hazelcast-code-samples/tree/master/distributed-map/entry-processor
On the other hand, if each user's data is larger than 5-10 MB and you need to access all of that data frequently, then splitting it is a good idea.
Please note that these are just general suggestions. You need to test different setups and find the most suitable and performant one for your use case.
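To make the EntryProcessor suggestion concrete, here is a minimal sketch against the Hazelcast 3.x API referenced above; the PlayerData class, the AddGoldProcessor name, and the "players" map name are illustrative assumptions, not taken from the question.

import java.io.Serializable;
import java.util.Map;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.AbstractEntryProcessor;

// Hypothetical value type; in practice this would be your real player object.
class PlayerData implements Serializable {
    long gold;
}

// Updates only the gold field, in place, on the member that owns the key.
class AddGoldProcessor extends AbstractEntryProcessor<String, PlayerData> {
    private final long delta;

    AddGoldProcessor(long delta) {
        this.delta = delta;
    }

    @Override
    public Object process(Map.Entry<String, PlayerData> entry) {
        PlayerData data = entry.getValue();
        data.gold += delta;
        entry.setValue(data);   // persist the change back into the map
        return data.gold;
    }
}

public class GoldUpdateExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, PlayerData> players = hz.getMap("players");
        players.put("player-42", new PlayerData());

        // Only the small processor object travels to the owning member,
        // not the whole PlayerData value.
        Object newGold = players.executeOnKey("player-42", new AddGoldProcessor(100));
        System.out.println("gold = " + newGold);
        hz.shutdown();
    }
}

The processor executes where the data lives, so the full player record never has to be serialized back and forth for a one-field change.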

Related

Usage of Redis for very large memory cache

I am planning to consider Redis for storing a large amount of data in a cache. Currently I store the data in my own cache written in Java. My use case is below.
I get 15-minute data from a source and I need to aggregate the data hourly. So for a given object A, every hour I will get 4 values that I need to aggregate into one value; the formula I will use is max / min / sum.
For making the key I plan to use the following:
a) object id - long
b) time - long
c) property id - int (each object may have many properties, which I need to aggregate separately for each property)
So the final key would look like:
objectid_time_propertyid
Every 15 minutes I may get around 50 to 60 million keys. I need to fetch these keys every time, convert the property value to double, apply the formula (max/min/sum etc.), then convert back to String and store it back.
So for every key I have one read and one write, plus a conversion in each case.
My questions are the following.
Is it advisable to use Redis for such a use case? Going forward I may aggregate hourly data to daily, daily to weekly and so on.
What would be the performance of reads and writes in the cache? (I did a sample test on Windows and reading and writing 100K keys took 30-40 seconds; that's not great, but I did it on Windows and I will finally need to run on Linux.)
I want to use the persistence function of Redis; what are the pros and cons of it?
If anyone has real experience using Redis as a memory cache that requires frequent updates, please give a suggestion.
Is it advisable to use Redis for such a use case? Going forward I may aggregate hourly data to daily, daily to weekly and so on.
"Advisable" depends on who you ask, but I certainly feel Redis will be up to the job. If a single server isn't enough, your description suggests that the dataset can be easily sharded, so a cluster will let you scale.
I would advise, however, that you store your data a little differently. First, every key in Redis has an overhead so the more of these, the more RAM you'll need. Therefore, instead of keeping a key per object-time-property, I recommend Hashes as a means for aggregating some values together. For example, you could use an object_id:timestamp key and store the property_id:value pairs under it.
Furthermore, instead of keeping the 4 discrete measurements for each object-property by timestamp and recomputing your aggregates, I suggest you keep just the aggregates and update these with new measurements. So, you'd basically have an object_id Hash, with the following structure:
object_id:hourtimestamp -> property_id1:max = x
                           property_id1:min = y
                           property_id1:sum = z
When getting new data - d - for an object's property, just recompute the aggregates:
property_id1:max = max(x, d)
property_id1:min = min(y, d)
property_id1:sum = z + d
Repeat the same for every resolution needed, e.g. use object_id:daytimestamp to keep day-level aggregates.
Finally, don't forget expiring your keys after they are no longer required (i.e. set a 24 hours TTL for the hourly counters and so forth).
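As a rough sketch of this update-the-aggregates approach, the snippet below uses the Jedis client with made-up key and field names; neither the client choice nor the names come from the answer.

import redis.clients.jedis.Jedis;

public class HourlyAggregates {

    // Fold one 15-minute measurement into the hourly aggregates for an object/property.
    static void addMeasurement(Jedis jedis, long objectId, long hourTimestamp,
                               int propertyId, double value) {
        String key = objectId + ":" + hourTimestamp;   // one hash per object per hour
        String maxField = propertyId + ":max";
        String minField = propertyId + ":min";
        String sumField = propertyId + ":sum";

        String currentMax = jedis.hget(key, maxField);
        if (currentMax == null || value > Double.parseDouble(currentMax)) {
            jedis.hset(key, maxField, Double.toString(value));
        }
        String currentMin = jedis.hget(key, minField);
        if (currentMin == null || value < Double.parseDouble(currentMin)) {
            jedis.hset(key, minField, Double.toString(value));
        }
        jedis.hincrByFloat(key, sumField, value);      // running sum
        jedis.expire(key, 24 * 60 * 60);               // drop hourly data after a day
    }

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            addMeasurement(jedis, 42L, 1386831600L, 7, 183.8);
        }
    }
}

Note that the read-then-write above is not atomic; with concurrent writers you would move the same logic into a Lua script or a WATCH/MULTI transaction.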
There are other possible approaches, mainly using Sorted Sets, that can be applicable to solve your querying needs (remember that storing the data is the easy part - getting it back is usually harder ;)).
What would be the performance of reads and writes in the cache? (I did a sample test on Windows and reading and writing 100K keys took 30-40 seconds; that's not great, but I did it on Windows and I will finally need to run on Linux.)
Redis, when running on my laptop on Linux in a VM, does in excess of 500K reads and writes per second. Performance is very dependent on how you use Redis' data types and API. Given your throughput of 60 million values over 15 minutes, or ~70K/sec writes of smallish data, Redis is more than equipped to handle that.
I want to use the persistence function of Redis; what are the pros and cons of it?
This is an extremely well-documented subject - please refer to http://redis.io/topics/persistence and http://oldblog.antirez.com/post/redis-persistence-demystified.html for starters.

Cassandra - multiple counters based on timeframe

I am building an application and using Cassandra as my datastore. In the app, I need to track event counts per user, per event source, and need to query the counts for different windows of time. For example, some possible queries could be:
Get all events for user A for the last week.
Get all events for all users for yesterday where the event source is source S.
Get all events for the last month.
Low latency reads are my biggest concern here. From my research, the best way I can think of to implement this is a different counter table for each permutation of source, user, and predefined time window. For example, create a count_by_source_and_user table, where the partition key is a combination of source and user ID, and then create a count_by_user table for just the user counts.
This seems messy. What's the best way to do this, or could you point towards some good examples of modeling these types of problems in Cassandra?
You are right. If latency is your main concern, and it should be if you have already chosen Cassandra, you need to create a table for each of your queries. This is the recommended way to use Cassandra: optimize for reads and don't worry about redundant storage. And since within every table data is stored sequentially according to the index, you cannot index a table in more than one way (as you would with a relational DB). I hope this helps. Look for the "Data Modeling" presentation that is usually given at "Cassandra Day" events. You may find it on "Planet Cassandra" or John Haddad's blog.
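To make the table-per-query idea concrete, here is a minimal sketch using the DataStax Java driver; the driver choice, keyspace, table, and column names are illustrative assumptions, not from the question.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class EventCountTables {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("events_ks")) {   // assumes the keyspace exists

            // One table per query pattern: counts by (source, user), bucketed by day.
            session.execute(
                "CREATE TABLE IF NOT EXISTS count_by_source_and_user (" +
                "  source text, user_id uuid, day text, count counter," +
                "  PRIMARY KEY ((source, user_id), day))");

            // A second, redundant table that answers the per-user-only query directly.
            session.execute(
                "CREATE TABLE IF NOT EXISTS count_by_user (" +
                "  user_id uuid, day text, count counter," +
                "  PRIMARY KEY (user_id, day))");

            // Every incoming event increments the counter in each table that serves a query.
            String user = "123e4567-e89b-12d3-a456-426614174000";
            session.execute(
                "UPDATE count_by_source_and_user SET count = count + 1 " +
                "WHERE source = 'S' AND user_id = " + user + " AND day = '2024-01-01'");
            session.execute(
                "UPDATE count_by_user SET count = count + 1 " +
                "WHERE user_id = " + user + " AND day = '2024-01-01'");
        }
    }
}

A query such as "all events for user A for the last week" then becomes a simple range scan over the day buckets of count_by_user's single partition for that user.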

Cassandra count use case

I'm trying to figure out an appropriate use case for Cassandra's counter functionality. I thought of a situation and I was wondering if this would be feasible. I'm not quite sure because I'm still experimenting with Cassandra, so any advice would be appreciated.
Let's say you had a small video service: you record the log of views in Cassandra, recording what video was played, which user played it, country, referrer, etc. You obviously want to show a count of how many times that video was played. Would incrementing a counter every time you insert a play event be a good solution to this? Or would there be a better alternative? Counting all the events on every read would take a pretty big performance hit, and even if you cached the results, the cache would be invalidated pretty quickly if you had a busy site.
Any advice would be appreciated!
Counters can be used for whatever you need to count within an application -- both "frontend" and "backend" data. I personally use them to store users' behaviour information (for backend analysis) and frontend ratings (each operation a user does in my platform gives the user some points). There is no real limitation on the use case -- the limits come from a few technical constraints, the biggest that come to mind being:
a counter column family can contain only counter columns (except the PK, obviously)
counters can't be reset: to set a counter to 0 you need to read and calculate before writing (with no guarantee that someone else hasn't updated it before you)
no TTL and no indexing/deletion
As far as your video service goes, it all depends on how you choose to model the data -- if you find a valid model that hits few partitions on each write/read and you have a good key distribution, I don't see any real problem with the implementation.
btw: you tagged Cassandra 2.0 but if you have to use counters you should think about 2.1 for the reasons described here
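For the video example, here is a minimal sketch of the play-counter idea (again using the DataStax Java driver; keyspace, table, and column names are made-up assumptions). It also illustrates the first constraint above: the counter table holds nothing but the key and counter columns.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class VideoPlayCounter {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("video_ks")) {   // assumes the keyspace exists

            // Besides the primary key, a counter table may contain only counter columns.
            session.execute(
                "CREATE TABLE IF NOT EXISTS video_plays (" +
                "  video_id uuid PRIMARY KEY, plays counter)");

            String video = "123e4567-e89b-12d3-a456-426614174000";

            // Increment on every play event; the detailed play log (user, country,
            // referrer, ...) lives in a separate, non-counter table.
            session.execute("UPDATE video_plays SET plays = plays + 1 WHERE video_id = " + video);

            // Reading the total back is a cheap single-partition lookup.
            Row row = session.execute(
                "SELECT plays FROM video_plays WHERE video_id = " + video).one();
            System.out.println("plays = " + row.getLong("plays"));
        }
    }
}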

Redis key design for real-time stock application

I am trying to build a real-time stock application.
Every second I can get some data from a web service like below:
[{"amount":"20","date":1386832664,"price":"183.8","tid":5354831,"type":"sell"},{"amount":"22","date":1386832664,"price":"183.61","tid":5354833,"type":"buy"}]
tid is the ticket ID for stock buying and selling;
date is seconds since 1970-01-01 (Unix time);
price/amount are the price at which the stock was traded and how many shares were traded.
Requirement
My requirement is to show the user the highest/lowest price for every minute/5 minutes/hour/day in real time, and to show the user the sum of amount for every minute/5 minutes/hour/day in real time.
Question
My question is how to store the data in Redis so that I can easily and quickly get the highest/lowest trade from the DB for different periods.
My design is something like below:
[date]:[tid]:amount
[date]:[tid]:price
[date]:[tid]:type
I am new to Redis. If the design is like this, does that mean I need to use a sorted set, and will there be any performance issue? Or is there another way to get the highest/lowest price for different periods?
Looking forward to your suggestions and design.
My suggestion is to store min/max/total for all intervals you are interested in and update them for the current intervals with every arriving data point. To avoid network latency when reading previous data for comparison, you can do it entirely inside the Redis server using Lua scripting.
One key per data point (or, even worse, per data point field) is going to consume too much memory. For the best results, you should group data into small lists/hashes (see http://redis.io/topics/memory-optimization). Redis only allows one level of nesting in its data structures: if your data has multiple fields and you want to store more than one item per key, you need to somehow encode it yourself. Fortunately, the standard Redis Lua environment includes msgpack support, which is a very efficient binary JSON-like format. The JSON entries in your example encoded with msgpack "as is" will be 52-53 bytes long. I suggest grouping by time so that you have 100-1000 entries per key. Suppose a one-minute interval fits this requirement. Then the keying scheme would be like this:
YYmmddHHMMSS — a hash from tid to msgpack-encoded data points for the given minute.
5m:YYmmddHHMM, 1h:YYmmddHH, 1d:YYmmdd — window data hashes which contain min, max, sum fields.
Let's look at a sample Lua script that accepts one data point and updates all keys as necessary. Due to the way Redis scripting works, we need to explicitly pass the names of all keys that will be accessed by the script, i.e. the live data key and all three window keys. Redis Lua also has a JSON parsing library available, so for the sake of simplicity let's assume we just pass it a JSON dictionary. That means that we have to parse the data twice, on the application side and on the Redis side, but the performance effect of this is not clear.
local function update_window(winkey, price, amount)
    -- HGETALL comes back to Lua as a flat {field, value, field, value, ...} array,
    -- so convert it into a field -> value table before using named lookups.
    local flat = redis.call('HGETALL', winkey)
    local windata = {}
    for i = 1, #flat, 2 do
        windata[flat[i]] = flat[i + 1]
    end
    if price > tonumber(windata.max or 0) then
        redis.call('HSET', winkey, 'max', price)
    end
    if price < tonumber(windata.min or 1e12) then
        redis.call('HSET', winkey, 'min', price)
    end
    redis.call('HSET', winkey, 'sum', (tonumber(windata.sum) or 0) + amount)
end

-- KEYS: current per-minute hash, then the 5-minute, hour and day window hashes
local currkey, fiveminkey, hourkey, daykey = unpack(KEYS)
-- ARGV[1]: the raw JSON data point
local data = cjson.decode(ARGV[1])
local packed = cmsgpack.pack(data)
local tid = data.tid
-- store the msgpack-encoded point in the per-minute hash, keyed by tid
redis.call('HSET', currkey, tid, packed)

local price = tonumber(data.price)
local amount = tonumber(data.amount)
update_window(fiveminkey, price, amount)
update_window(hourkey, price, amount)
update_window(daykey, price, amount)
This setup can do thousands of updates per second, is not very hungry on memory, and window data can be retrieved instantly.
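For completeness, here is a hedged sketch of how an application might invoke such a script, using the Jedis client (which the answer does not mention); the script file name and the hard-coded key literals are purely illustrative.

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

import redis.clients.jedis.Jedis;

public class TradeIngest {
    public static void main(String[] args) throws Exception {
        // The Lua script shown above, kept in a file next to the application.
        String script = new String(Files.readAllBytes(Paths.get("update_trade.lua")),
                                   StandardCharsets.UTF_8);

        // One raw trade as received from the web service.
        String tradeJson = "{\"amount\":\"20\",\"date\":1386832664,\"price\":\"183.8\"," +
                           "\"tid\":5354831,\"type\":\"sell\"}";

        // Key names for the minute, 5-minute, hour and day buckets; in real code they
        // would be formatted from the trade's 'date' field rather than hard-coded.
        String minuteKey  = "1312120717";
        String fiveMinKey = "5m:1312120715";
        String hourKey    = "1h:13121207";
        String dayKey     = "1d:131212";

        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Every key the script touches must be declared in KEYS; the JSON goes in ARGV.
            jedis.eval(script,
                       Arrays.asList(minuteKey, fiveMinKey, hourKey, dayKey),
                       Arrays.asList(tradeJson));
        }
    }
}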
UPDATE: On the memory side, 50-60 bytes per point is still a lot if you want to store more than a few million of them. With this kind of data I think you can get as low as 2-3 bytes per point using a custom binary format, delta encoding, and subsequent compression of chunks using something like snappy. Whether it's worth doing depends on your requirements.

Update 40+ million entities in an Azure table with many instances: how to handle concurrency issues

So here is the problem. I need to update about 40 million entities in an Azure table. Doing this with a single instance (select -> delete original -> insert with new PartitionKey) will take until about Christmas.
My thought is to use an Azure worker role with many instances running. The problem here is that the query grabs the top 1000 records. That's fine with one instance, but with 20 running, their selects will obviously overlap... a lot. This would result in a lot of wasted compute trying to delete records that were already deleted by another instance and updating records that have already been updated.
I've run through a few ideas, but the best option I have is to have the roles fill up a queue with partition and row keys, then have the workers dequeue and do the actual processing.
Any better ideas?
Very interesting question!!! Extending @Brian Reischl's answer (and a lot of it is thinking out loud, so please bear with me :))
Assumptions:
Your entities are serializable in some shape or form. I would assume that you'll get raw data in XML format.
You have one separate worker role which is doing all the reading of entities.
You know how many worker roles would be needed to write modified entities. For the sake of argument, let's assume it is 20 as you mentioned.
Possible Solution:
First you will create 20 blob containers. Let's name them container-00, container-01, ... container-19.
Then you start reading entities - 1000 at a time. Since you're getting raw data in XML format out of table storage, you create an XML file and store those 1000 entities in container-00. You fetch the next set of entities and save them in XML format in container-01, and so on and so forth till you hit container-19. Then the next set of entities goes into container-00. This way you're evenly distributing your entities across all 20 containers.
Once all the entities are written, your worker role for processing these entities comes into the picture. Since we know that instances in Windows Azure are sequentially ordered, you get instance names like WorkerRole_IN_0, WorkerRole_IN_1, and so on.
What you would do is take the instance name and extract the number "0", "1", etc. Based on this you would determine which worker role instance reads from which blob container: WorkerRole_IN_0 will read files from container-00, WorkerRole_IN_1 will read files from container-01, and so on.
Now each individual worker role instance will read an XML file, create the entities from that XML file, update those entities, and save them back into table storage. Once this process is done, it deletes the XML file and moves on to the next file in that container. Once all files are read and processed, you can just delete the container.
As I said earlier, this is a lot of "thinking out loud" kind of solution, and some things must be considered, like what happens when the "reader" worker role goes down, among other things.
If your PartitionKeys and/or RowKeys fall into a known range, you could attempt to divide them into disjoint sets of roughly equal size for each worker to handle, e.g. Worker1 handles keys starting with 'A' through 'C', Worker2 handles keys starting with 'D' through 'F', etc.
If that's not feasible, then your queuing solution would probably work. But again, I would suggest that each queue message represent a range of keys if possible, e.g. a single queue message specifies deleting everything in the range 'A' through 'C', or something like that.
In any case, if you have multiple entities in the same PartitionKey then use batch transactions to your advantage for both inserting and deleting. That could cut down the number of transactions by almost a factor of ten in the best case. You should also use parallelism within each worker role. Ideally use the async methods (either Begin/End or *Async) to do the writing, and run several transactions (12 is probably a good number) in parallel. You can also run multiple threads, but that's somewhat less efficient. In either case, a single worker can push a lot of transactions with table storage.
As a side note, your process should go "Select -> Insert New -> Delete Old". Going "Select -> Delete Old -> Insert New" could result in permanent data loss if a failure occurs between steps 2 & 3.
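To make the batching idea concrete, here is a rough sketch using the classic Azure Storage SDK for Java; the SDK choice, table name, and partition names are assumptions for illustration (the question presumably uses the .NET client, where the same pattern applies). Each batch targets a single partition and holds at most 100 operations, and new entities are inserted before the old ones are deleted, as recommended above.

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.table.CloudTable;
import com.microsoft.azure.storage.table.DynamicTableEntity;
import com.microsoft.azure.storage.table.TableBatchOperation;
import com.microsoft.azure.storage.table.TableQuery;

public class RepartitionWorker {

    // Copies every entity of one partition to a new PartitionKey, then deletes the originals,
    // batching up to 100 operations per call (the Table service limit per batch).
    static void repartition(CloudTable table, String oldPartition, String newPartition) throws Exception {
        TableQuery<DynamicTableEntity> query = TableQuery.from(DynamicTableEntity.class)
                .where(TableQuery.generateFilterCondition(
                        "PartitionKey", TableQuery.QueryComparisons.EQUAL, oldPartition));

        TableBatchOperation inserts = new TableBatchOperation();  // all target newPartition
        TableBatchOperation deletes = new TableBatchOperation();  // all target oldPartition

        for (DynamicTableEntity entity : table.execute(query)) {
            DynamicTableEntity copy = new DynamicTableEntity();
            copy.setPartitionKey(newPartition);
            copy.setRowKey(entity.getRowKey());
            copy.setProperties(entity.getProperties());

            inserts.insertOrReplace(copy);
            deletes.delete(entity);

            if (inserts.size() == 100) {
                table.execute(inserts);   // insert new first ...
                table.execute(deletes);   // ... delete old only after the inserts succeeded
                inserts.clear();
                deletes.clear();
            }
        }
        if (!inserts.isEmpty()) {
            table.execute(inserts);
            table.execute(deletes);
        }
    }

    public static void main(String[] args) throws Exception {
        CloudStorageAccount account =
                CloudStorageAccount.parse(System.getenv("AZURE_STORAGE_CONNECTION_STRING"));
        CloudTable table = account.createCloudTableClient().getTableReference("entities");
        // A queue message would typically carry the partition (or key range) to process.
        repartition(table, "old-partition-A", "new-partition-A");
    }
}

Several such repartition calls can run in parallel across worker instances, each driven by a queue message naming a disjoint partition or key range.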
I think you should mark your question as the answer ;) I can't think of a better solution since I don't know what your partition and row keys look like. But to enhance your solution, you may choose to pump multiple partition/row keys into each queue message to save on transaction costs. Also, when consuming from the queue, get messages in batches of 32 and process them asynchronously. I was able to transfer 170 million records from SQL Server (Azure) to Table storage in less than a day.
