What is the ideal value size range for redis? Is 100KB too large? - azure

Is there an upper limit to the suggested size of the value stored for a particular key in Redis?
Is 100KB too large?

There are two things that you need to take into consideration when deciding if something is "too big".
Does Redis have support for the size of key/value object that you want to store?
The answer to this question is documented pretty well on the Redis site (https://redis.io/topics/data-types), so I won't go into detail here.
For a given key/value size, what are the consequences I need to be aware of?
This is a much more nuanced answer as it depends heavily on how you are using Redis and what behaviors are acceptable to your application and which ones are not.
For instance, larger key/value sizes can lead to fragmentation of the memory space within your server. If you aren't using all the memory in your Redis server anyway, then this may not be a big deal to you. However, if you need to squeeze all the memory you can out of your Redis server, then you are reducing the efficiency of how memory is allocated and losing access to some memory that you would otherwise have.
As another example, when you are reading these large key/value entries from Redis, it means you have to transfer more data over the network from the server to the client. Some consequences of this are:
It takes more time to transfer the data, so your client may need to have a higher timeout value configured to allow for this additional transfer time.
Requests made to the server on the same TCP connection can get stuck behind the big transfer and cause other requests to timeout. See here for an example scenario.
The network buffers used to transfer this data consume memory on both the client and the server, which can aggravate the memory issues already described around fragmentation.
If these large key/value items are accessed frequently, this magnifies the impacts described above as you are repeatedly transferring this data over and over again.
So, the answer is not a crisp "yes" or "no", but some things that you should consider and possibly test for your expected workload. In general, I do advise our customers to try to stay as small as possible and I have often said to try to stay below 100kb, but I have also seen plenty of customers use Redis with larger values (in the MB range). Sometimes those larger values are no big deal. In other cases, it may not be an issue until months or years later when their application changes in load or behavior.
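If you want to test that for your own workload, a minimal sketch like the following can help; it assumes a Node.js client such as ioredis, a Redis instance on localhost, and the rough 100 KB guideline mentioned above (all of which are my assumptions, not requirements):
// Hypothetical pre-write check: how big is this value, and how long does the SET take?
const Redis = require("ioredis");
const redis = new Redis(); // assumes Redis on localhost:6379

const MAX_VALUE_BYTES = 100 * 1024; // the rough 100 KB guideline discussed above

async function cacheWithSizeCheck(key, obj) {
  const payload = JSON.stringify(obj);
  const size = Buffer.byteLength(payload, "utf8");
  if (size > MAX_VALUE_BYTES) {
    console.warn(key + " is " + size + " bytes; consider splitting or compressing it");
  }
  const start = process.hrtime.bigint();
  await redis.set(key, payload);
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log("SET " + key + ": " + size + " bytes in " + elapsedMs.toFixed(1) + " ms");
}
Timing the corresponding GET with your production payload sizes is also the quickest way to see whether your client timeouts need adjusting.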

Is there an upper limit to the suggested size of the value stored for a particular key in Redis?
According to the official docs, the maximum size of a Redis string (whether used as a key or a value) is 512MB.
Is 100KB too large?
It depends on the application and use; for general-purpose applications it should be fine.
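If you want to check how large your existing values actually are relative to that limit, STRLEN reports the size of a string value in bytes; a quick sketch, assuming a Node.js client such as ioredis:
const Redis = require("ioredis");
const redis = new Redis();

async function reportValueSize(key) {
  const bytes = await redis.strlen(key); // 0 if the key does not exist
  console.log(key + ": " + bytes + " bytes (" + (bytes / 1024).toFixed(1) + " KB)");
  return bytes;
}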

Related

Does RAM affect the time taken to sort an array?

I have an array of 500k to a million items to be sorted. Would going with a configuration of increased RAM be beneficial or not, say 8GB to 32GB or above? I'm using a Node.js/MongoDB environment.
Adding RAM for an operation like that would only make a difference if you have filled up the available memory with everything that was running on your computer and the OS was swapping data out to disk to make room for your sort operation. Chances are, if that was happening, you would know because your computer would become pretty sluggish.
So, you just need enough memory for the working set of whatever applications you're running and then enough memory to hold the data you are sorting. Adding additional memory beyond that will not make any difference.
If you had an array of a million numbers to be sorted in Javascript, that array would likely take (1,000,000 * 8 bytes per number) + some overhead for a JS data structure = ~8MB. If your array values were larger than 8 bytes, then you'd have to account for that in the calculation, but hopefully you can see that this isn't a ton of memory in a modern computer.
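To sanity-check that estimate on your own machine, Node's built-in process.memoryUsage() is enough; a rough sketch:
// Rough measurement of the heap used by a million-number array (plain Node.js).
const before = process.memoryUsage().heapUsed;

const arr = new Array(1000000);
for (let i = 0; i < arr.length; i++) arr[i] = Math.random();

const after = process.memoryUsage().heapUsed;
console.log("array uses roughly " + ((after - before) / 1024 / 1024).toFixed(1) + " MB of heap");

arr.sort(function (a, b) { return a - b; }); // sorts in place; needs comparatively little extra memory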
If you have only an 8GB system and you have a lot of services and other things configured in it and are perhaps running a few other applications at the time, then it's possible that by the time you run nodejs, you don't have much free memory. You should be able to look at some system diagnostics to see how much free memory you have. As long as you have some free memory and are not causing the system to do disk swapping, adding more memory will not increase performance of the sort.
Now, if the data is stored in a database and you're doing some major database operation (such as creating a new index), then it's possible that the database may adjust how much memory it can use based on how much memory is available and it might be able to go faster by using more RAM. But, for a Javascript array which is already all in memory and is using a fixed algorithm for the sort, this would not be the case.

Data security during dynamic memory allocation

Several minutes ago, my friends and I solved some algorithmic problems on leetcode.com and shared our solutions. We used high-level languages, and when new memory is allocated by Array.new(128) in Ruby or int[] map = new int[128]; in Java, it is already filled with zero-like values (nil or 0, respectively).
So it's guaranteed that a high-level program gets cleared memory to work with.
And here I have a question: in a C or assembler program, could it happen that a newly allocated chunk of memory still holds unchanged data from another process?
And thus one process gets data from another process, maybe even data from another user who worked on the system some time ago. Could this be a way for information to leak?
Does the OS clear memory before sharing it among processes? And if so, isn't it very expensive to run so many iterations?
Thank you.
UPD: http://www.cplusplus.com/articles/ETqpX9L8/ suggests that in "lower-level" languages you need to clear sensitive data manually to prevent it leaking to other processes.
Yes, in lower-level languages where memory is not initialized, it could contain valuable stuff from other processes. There have been encryption key leakage attacks done this way by continually allocating memory and scanning it for what looks like useful information.
Security sensitive programs that store passwords or crypto keys, etc should always clear the memory ASAP after use. It's not only to prevent leaks through re-allocated memory, but there are also other attack vectors like RAM dumps that could be used to extract secrets. Always zero or randomize your memory when you are done with it.
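The same distinction is visible even from Node.js: Buffer.allocUnsafe() skips zero-filling and can hand back bytes left over from earlier allocations in the same process, while Buffer.alloc() zero-fills like the Ruby and Java examples above. A small sketch of the zero-it-when-done habit (the secret below is just a stand-in):
// Built-in Node.js APIs; no external packages.
const { randomBytes } = require("crypto");

const reused = Buffer.allocUnsafe(64); // not zeroed; may contain stale data from this process
console.log("possibly stale bytes:", reused.toString("hex"));

const clean = Buffer.alloc(64); // zero-filled, like Array.new(128) in Ruby or new int[128] in Java

const secret = randomBytes(32); // stand-in for a key or password
// ... use the secret ...
secret.fill(0); // overwrite it as soon as it is no longer needed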

store the temporary data in memory array or database

Just wondering whether I should store temporary data in an in-memory array or in a database (NoSQL)?
Is there any performance difference?
Somebody said caching data in memory in Node.js is dangerous, since it may lead to memory leaks, and that it can't handle big data.
Your comments are welcome.
There are a number of options as storage media, but you are probably best profiling your specific application and making a determination. You may find that the overhead of connecting to storage, storing and retrieving data outweighs any improvements in your memory footprint. You might find the opposite - you will need to profile your application to know.
The NPM package look is very useful for this type of thing: https://www.npmjs.org/package/look.
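If the in-memory route wins in your profiling, the memory-leak worry above mostly comes down to letting the cache grow without bound; here is a minimal sketch of a size-capped cache using only built-in Node.js features (the 10,000-entry cap is an arbitrary assumption):
const MAX_ENTRIES = 10000;
const cache = new Map();

function cacheSet(key, value) {
  if (cache.size >= MAX_ENTRIES) {
    cache.delete(cache.keys().next().value); // Maps iterate in insertion order, so this evicts the oldest entry
  }
  cache.set(key, value);
}

function cacheGet(key) {
  return cache.get(key);
}

// While profiling, watch the heap alongside the cache size.
setInterval(function () {
  const mb = process.memoryUsage().heapUsed / 1024 / 1024;
  console.log("entries: " + cache.size + ", heap used: " + mb.toFixed(1) + " MB");
}, 10000).unref();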

Mongo suffering from a huge number of faults

I'm seeing a huge (~200++) faults/sec number in my mongostat output, though very low lock %:
My Mongo servers are running on m1.large instances on the Amazon cloud, so they each have 7.5GB of RAM:
root:~# free -tm
total used free shared buffers cached
Mem: 7700 7654 45 0 0 6848
Clearly, I do not have enough memory for all the caching mongo wants to do (which, btw, results in huge CPU usage % due to disk IO).
I found this document that suggests that in my scenario (high fault, low lock %), I need to "scale out reads" and "more disk IOPS."
I'm looking for advice on how to best achieve this. Namely, there are LOTS of different potential queries executed by my node.js application, and I'm not sure where the bottleneck is happening. Of course, I've tried
db.setProfilingLevel(1);
However, this doesn't help me that much, because the output just shows me slow queries, and I'm having a hard time translating that information into which queries are causing the page faults...
As you can see, this is resulting in a HUGE (nearly 100%) CPU wait time on my PRIMARY mongo server, though the 2x SECONDARY servers are unaffected...
Here's what the Mongo docs have to say about page faults:
Page faults represent the number of times that MongoDB requires data not located in physical memory, and must read from virtual memory. To check for page faults, see the extra_info.page_faults value in the serverStatus command. This data is only available on Linux systems.
Alone, page faults are minor and complete quickly; however, in aggregate, large numbers of page faults typically indicate that MongoDB is reading too much data from disk and can indicate a number of underlying causes and recommendations. In many situations, MongoDB’s read locks will “yield” after a page fault to allow other processes to read and avoid blocking while waiting for the next page to read into memory. This approach improves concurrency, and in high volume systems this also improves overall throughput.
If possible, increasing the amount of RAM accessible to MongoDB may help reduce the number of page faults. If this is not possible, you may want to consider deploying a shard cluster and/or adding one or more shards to your deployment to distribute load among mongod instances.
So, I tried the recommended command, which is terribly unhelpful:
PRIMARY> db.serverStatus().extra_info
{
    "note" : "fields vary by platform",
    "heap_usage_bytes" : 36265008,
    "page_faults" : 4536924
}
Of course, I could increase the server size (more RAM), but that is expensive and seems to be overkill. I should implement sharding, but I'm actually unsure what collections need sharding! Thus, I need a way to isolate where the faults are happening (what specific commands are causing faults).
Thanks for the help.
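One rough way to narrow that down from the profiler output itself is to sort system.profile by how many documents each operation scanned, since scan-heavy queries are the usual source of page faults. A hedged mongo shell sketch (the field is nscanned on older MongoDB versions, docsExamined on newer ones):
// Show the ten profiled queries that scanned the most documents.
db.system.profile.find({ op: "query" })
  .sort({ nscanned: -1 })
  .limit(10)
  .forEach(function (p) {
    print(p.ns + "  scanned=" + p.nscanned + "  returned=" + p.nreturned + "  millis=" + p.millis);
  });
This won't attribute individual page faults to specific queries, but it usually points at the collections worth indexing or sharding first.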
We don't really know what your data/indexes look like.
Still, an important rule of MongoDB optimization:
Make sure your indexes fit in RAM. http://www.mongodb.org/display/DOCS/Indexing+Advice+and+FAQ#IndexingAdviceandFAQ-MakesureyourindexescanfitinRAM.
Consider that the smaller your documents are, the higher your key/document ratio will be, and the higher your RAM/Disksize ratio will need to be.
If you can adjust your schema a bit to lump some data together, and reduce the number of keys you need, that might help.
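To check the indexes-fit-in-RAM rule above against this deployment, the standard mongo shell helpers give a rough comparison of total index size versus the server's resident memory:
// Total index size for the current database (bytes) vs. mongod resident memory (MB).
var indexBytes = db.stats().indexSize;
var mem = db.serverStatus().mem;
print("indexes: " + (indexBytes / 1024 / 1024).toFixed(0) + " MB, resident: " + mem.resident + " MB");

// Per-collection breakdown, to spot the worst offenders.
db.getCollectionNames().forEach(function (name) {
  var s = db[name].stats();
  print(name + ": indexes " + (s.totalIndexSize / 1024 / 1024).toFixed(1) + " MB");
});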

How to fix Redis "memory leak"

I'm using a Redis memory store on dotCloud but despite expiring keys its used_memory never drops back down again. Using flushdb or flushall from redis-cli doesn't cause the used_memory to drop from its ~20 MB. I've had the same problem on RedisToGo.
Anyone know how I am managing to fill it up, and how I can avoid doing this? Perhaps there are certain characters you shouldn't put into Redis values or keys? I'm using it with EM and Resque from a Heroku Rails app.
Redis also has a mem_fragmentation_ratio (e.g. 2.5), so using both values would likely lead to more accurate measurements. At very low used_memory levels (e.g. near zero) the fragmentation ratio can be quite high, and to mitigate this you would need to stop/start the Redis instance manually.
RedisToGo may be reporting real memory usage in this manner, as a combination of used_memory x mem_fragmentation_ratio.
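To see both numbers for your own instance, pull them out of INFO memory (redis-cli info memory shows the same fields); a sketch assuming a Node.js client such as ioredis:
const Redis = require("ioredis");
const redis = new Redis();

async function reportMemory() {
  const info = await redis.info("memory"); // raw text of the INFO memory section
  const fields = {};
  info.split("\r\n").forEach(function (line) {
    const idx = line.indexOf(":");
    if (idx > 0) fields[line.slice(0, idx)] = line.slice(idx + 1);
  });
  const used = Number(fields.used_memory);
  const ratio = Number(fields.mem_fragmentation_ratio);
  console.log("used_memory: " + (used / 1024 / 1024).toFixed(1) + " MB, mem_fragmentation_ratio: " + ratio);
  console.log("approximate RSS: " + ((used * ratio) / 1024 / 1024).toFixed(1) + " MB");
}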
