I want to store JSON data in Redis. How can I do that, and what is the best option for storing JSON in Redis? My object will look like this:
{"name":"b123.home.group.title", "value":"Hellow World", "locale":"en-us", "uid":"b456"}
I want to update the object based on value and locale, and to retrieve it by arbitrary conditions. I also want TTL support, so that entries can be removed when no longer required.
So what is the best way to store this data in Redis without memory issues, while supporting all of these operations quickly?
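One common approach, sketched below with the node-redis client, is to store each object as a hash keyed by its uid, keep small index sets for the fields you query on, and put a TTL on the hash. The key scheme and client choice here are assumptions, not the only option; if your Redis server has the RedisJSON module, JSON.SET/JSON.GET would be a more direct fit.

const { createClient } = require('redis');

async function demo() {
    const client = createClient();
    await client.connect();

    // Store the object as a hash keyed by uid (key scheme is illustrative).
    await client.hSet('obj:b456', {
        name: 'b123.home.group.title',
        value: 'Hello World',
        locale: 'en-us',
        uid: 'b456'
    });

    // TTL support: the hash disappears on its own after an hour.
    await client.expire('obj:b456', 3600);

    // Secondary index so lookups by locale don't require scanning all keys.
    // (Note: index members go stale when the hash expires; clean them up lazily.)
    await client.sAdd('idx:locale:en-us', 'obj:b456');

    const obj = await client.hGetAll('obj:b456');
    console.log(obj);
    await client.quit();
}

demo().catch(console.error);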
We have a map of custom object keys to custom value objects (complex objects). We set the in-memory-format to OBJECT, but IMap.get takes more time when the retrieved object is big. We cannot afford latency here, as the value is required for further processing. IMap.get is called in the JVM where the cluster is started. Is there a way to get the objects quickly irrespective of their size?
This is partly the price you pay for in-memory-format==OBJECT.
To confirm, try in-memory-format==BINARY and compare the difference.
Store and retrieve are slower with OBJECT, but some queries will be faster. If you run enough of those queries, the penalty is justified.
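For reference, the storage format is configured per map on the member side; a minimal hazelcast.yaml sketch (the map name is illustrative):

hazelcast:
  map:
    myMap:
      in-memory-format: BINARY   # the default; OBJECT keeps entries deserialized on the member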
If you do get(X) and the value is stored deserialized (OBJECT), the following sequence occurs:
1 - the object is serialized, from object to byte[]
2 - the byte array is sent to the caller, possibly across the network
3 - the object is deserialized by the caller, from byte[] back to object
If you change to store serialized (BINARY), step 1 isn't needed.
If the caller is the same process, step 2 isn't needed.
If you can, it's worth upgrading (latest is 5.1.3), as there are some newer options that may perform better. See this blog post for an explanation.
You also don't necessarily have to return the entire object to the caller. A read-only EntryProcessor can extract just the part of the data you need to return across the network. A smaller network packet will help, but if the cost is in the serialization then the difference may not be significant.
If you're retrieving a non-local map entry (either because you're using a client-server deployment model, or an embedded deployment with multiple nodes so that some retrievals are remote), then a retrieval is going to require moving data across the network. There is no way to move data across the network that isn't affected by object size; so the solution is to find a way to make the objects more compact.
You don't mention what serialization method you're using, but the default Java serialization is horribly inefficient ... any other option would be an improvement. If your code is all Java, IdentifiedDataSerializable is the most performant. See the following blog for some numbers:
https://hazelcast.com/blog/comparing-serialization-options/
Also, if your data is stored in BINARY format, then it's stored in serialized form (whatever serialization option you've chosen), so at retrieval time the data is ready to be put on the wire. By storing in OBJECT form, you'll have to perform the serialization at retrieval time. This will make your GET operation slower. The trade-off is that if you're doing server-side compute (using the distributed executor service, EntryProcessors, or Jet pipelines), the server-side compute is faster if the data is in OBJECT format because it doesn't have to deserialize the data to access the data fields. So if you aren't using those server-side compute capabilities, you're better off with BINARY storage format.
Finally, if your objects are large, do you really need to be retrieving the entire object? Using the SQL API, you can do a SELECT of just certain fields in the object, rather than retrieving the entire object. (You can also do this with Projections and the older Predicate API but the SQL method is the preferred way to do this). If the client code doesn't need the entire object, selecting certain fields can save network bandwidth on the object transfer.
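As a sketch, assuming a SQL mapping has been created for the map and that the value type exposes the fields in question (names here are illustrative), a query like the following returns only two columns instead of deserializing and shipping whole values:

SELECT id, status FROM myMap WHERE status = 'ACTIVE';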
I have a JSON object with a size of about 350 KB, a list of about 1,500 items. I don't want to keep it on the front end, but I don't want to use a database either. Can I store it in Node.js and fetch it from there each time the data is needed? I have no idea whether this would be considered bad practice. What do you think?
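For what it's worth, what the question describes is just module-level state in the Node.js process, something like this sketch (Express and the items.json file are assumptions):

const express = require('express');
const items = require('./items.json'); // ~350 KB list, loaded once at startup, then held in memory

const app = express();

// Every request is served straight from process memory; no database involved.
app.get('/items', (req, res) => res.json(items));

app.listen(3000);

The main trade-offs are that the data is lost on restart and duplicated per process if you run several instances.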
I have a Node.js application that operates on user data in the form of JSON. The size of the JSON data is variable and depends on the user; usually it is around 30KB.
Every time any value in the JSON changes, I recalculate the JSON object, stringify it, and encrypt the string using RSA encryption.
Calculating the JSON involves the following steps:
From the database, get the data required to form the JSON. This involves querying at least 4 tables in a nested for loop.
Use Object.assign() to combine the data into a larger JSON object, inside the same nested for loop.
Once the final object is formed, stringify it and encrypt it using the crypto module of Node.js.
So all of this is overloading my CPU when the data is huge, which means a large number of iterations and a lot of data to encrypt.
We use a Postgres database, so I have to recalculate the entire JSON object even if only a single parameter value changed.
I was wondering whether a Node.js worker pool could be a solution to this. I would also like to know how worker threads handle tasks under the hood. Suggestions for alternative solutions to this problem are also welcome.
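Regarding how worker threads behave under the hood: each worker runs in its own V8 isolate with its own event loop, and data passed via workerData or postMessage is copied with the structured clone algorithm (ArrayBuffers can be transferred instead of copied). Below is a minimal single-file sketch of offloading the stringify/encrypt step; the encryption itself is left as a placeholder, since plain RSA cannot encrypt payloads larger than the key size (a hybrid AES + RSA scheme is the usual workaround):

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
    // Main thread: hand the user's data to a worker so the event loop stays free.
    const worker = new Worker(__filename, { workerData: { userId: 'u1', payload: { a: 1 } } });
    worker.once('message', (result) => console.log('encrypted bytes:', result.length));
    worker.once('error', console.error);
} else {
    // Worker thread: the CPU-bound work happens here, off the main event loop.
    const json = JSON.stringify(workerData.payload);
    // ...encrypt `json` here (e.g. AES for the body, RSA only for the AES key)...
    parentPort.postMessage(Buffer.from(json)); // placeholder for the encrypted buffer
}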
I want to store all objects of a class in a Redis cache and be able to retrieve them. As I understand it, hashes are used for storing objects, but each one requires its own key. So I can't save them all under one key, e.g. "items", and retrieve them by that key. The only way I can see to do it is something like this:
items.forEach(item => {
    redis.hmset(`item${item.id}`, item);
});
But this feels wrong, and I need another loop when I want to read the data back. Is there a better solution?
There is also the problem of associated objects: I can't find anywhere how they are stored and used in Redis.
As I understand it, you want to save different keys with the same prefix. You can use MSET to store them and MGET to retrieve them, with your keys as params. If you still want to use HMSET, use a pipeline in the loop, so the whole batch goes to Redis in a single round trip.
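A sketch of both suggestions (the ioredis client is an assumption; the same pattern works with other clients):

const Redis = require('ioredis');
const redis = new Redis();
const items = [{ id: 1, name: 'a' }, { id: 2, name: 'b' }];

async function demo() {
    // Flat string values: MSET / MGET move everything in one round trip each.
    await redis.mset({ 'item1': 'one', 'item2': 'two' });
    console.log(await redis.mget('item1', 'item2'));

    // Hashes: queue every HMSET on a pipeline, then send the batch at once.
    const pipeline = redis.pipeline();
    items.forEach(item => pipeline.hmset(`item${item.id}`, item));
    await pipeline.exec();
}

demo().finally(() => redis.quit());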
I am currently developing an online real-time game which uses a global dictionary to store all the game rooms. If a user tries to enter a game, the script checks the dictionary looking for an empty room. If no empty room is found, a new room object is added to the dictionary so other logged-in users can enter a game room.
The problem is that using a global dictionary for such a task is not a good idea, as pointed out in these questions: Are global variables thread safe in flask? How do I share data between requests? and Preserving global state in a flask application
In the answers, it was recommended to store data shared between requests in a database or in memcached. If I wanted to do it the database way, should I store the entire dict in the database every time it is requested? Is there a better and more secure way to do this?
should I store the entire dict in the database every time it was requested
If you use a database (like SQLite), the entire dict should already be in the database, and you can query it whenever you need information about the game rooms. Do not keep the dict with shared data in memory: move all the shared data into the database, drop the in-memory dict, query the database whenever you need shared data, and update the database when shared data changes.
I suggest you try it out. I think you will find the database fast (and secure) enough.
Note that a database also has ACID properties which you can use and rely on. The value of these ACID properties might not be clear at the moment, but that can change the more you use a database.
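As a sketch of what the room store could look like in SQLite (the schema and the two-player capacity are assumptions):

-- One row per game room instead of one dict entry per room.
CREATE TABLE rooms (
    id      INTEGER PRIMARY KEY,
    players INTEGER NOT NULL DEFAULT 0
);

-- On join: look for a room with a free slot...
SELECT id FROM rooms WHERE players < 2 LIMIT 1;

-- ...and either claim the slot or create a fresh room.
UPDATE rooms SET players = players + 1 WHERE id = ?;
INSERT INTO rooms (players) VALUES (1);

Each request then reads and writes individual rows; the dict-as-a-whole never needs to be serialized or passed around, and the database's transactions give you the ACID guarantees mentioned above.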