Redis: only persist non-expiring keys, please?

From what I'm seeing in the docs, it appears that Redis only lets you persist all keys or none at all (to disk).
What I'm trying to do is persist only the keys that don't have a TTL. That is, if I
setex some_key 60 "some data"
// or
set some_key "some data"
expire some_key 60
then don't persist those keys to disk -- ever!
In case this is not possible, I guess the next best solution is to use Memcached for those values and Redis for what I'd like persisted, but it'd sure be nice if I didn't have to go that far.

AFAIK, what you say is correct: Redis can either persist everything or nothing. However, in this scenario, instead of using Memcached I would run two Redis instances, one that persists its keys and one with persistence disabled, since creating an extra Redis instance is easy.
Also, if in the future there is a situation where you need a few of those keys to be persisted, you can make that change easily at the application level if you use Redis instead of Memcached.
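As a minimal sketch of that two-instance setup (the port number is illustrative), the volatile instance would disable both persistence mechanisms in its redis.conf, while the persistent instance keeps the defaults:

```conf
# redis-volatile.conf -- instance for keys with a TTL; never touches disk

# run alongside the default persistent instance on 6379
port 6380

# disable RDB snapshots
save ""

# disable the append-only file
appendonly no
```

The application would then send its setex/expire traffic to port 6380 and everything it wants persisted to the default instance.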

Related

How do I automate redis flushall when it runs out of memory?

My Redis DB runs out of memory after 2-3 days, so I flush it manually. But I want to run flushall automatically from Node.js.
Read about Redis' maxmemory-policy and then choose one of the allkeys-* policies.
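As a sketch of that setting (the memory limit is illustrative), two lines in redis.conf make Redis evict the least recently used keys once the limit is hit, instead of filling up:

```conf
# cap Redis memory usage
maxmemory 100mb

# evict the least recently used keys (across all keys) when the cap is reached
maxmemory-policy allkeys-lru
```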
You should look into why this happens in the first place; maybe you could expire keys using a TTL.
Another option would be to set up a cronjob that flushes Redis once in a while (not recommended).

Redis stored procedures like functionality

I'm trying to implement a basic service that receives a msg and time in the future and once the time arrives, it prints the msg.
I want to implement it with Redis.
While investigating the capabilities of Redis, I found that if I use keyspace notifications (https://redis.io/topics/notifications) on expired keys together with a subscription, I get what I want.
But I am facing a problem, if the service is down for any reason, I might lose those expiry triggers.
To resolve that issue, I thought of having a queue (in Redis as well) which will store expired keys; once the service is back up, it will pull all the expired values from that queue. But for that, I need some kind of "stored procedure" that will handle the expiry routing.
Unfortunately, I couldn't find a way to do that.
So the question is: is it possible to implement this with the current capabilities of Redis, and also, do I have alternatives?
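For the notification half, a minimal redis-cli sketch (database 0 and the key name are illustrative). This enables expired-event notifications and listens on the corresponding channel; note that it does not, by itself, solve the missed-events problem described above:

```shell
# enable keyspace notifications for expired events (E = keyevent, x = expired)
redis-cli CONFIG SET notify-keyspace-events Ex

# in a second terminal: listen for expirations in database 0
redis-cli SUBSCRIBE __keyevent@0__:expired

# back in the first terminal: any key set with a TTL now produces a
# message on that channel when it expires
redis-cli SET msg:123 "print me later" EX 60
```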

AWS Redis Reader endpoint and ioredis

We want our Redis to be more scalable and we want to be able to add more read instances.
I am trying to use this new Reader endpoint: https://aws.amazon.com/about-aws/whats-new/2019/06/amazon-elasticache-launches-reader-endpoint-for-redis
However, I don't see any easy or automated way for ioredis to support that approach, where I can set which endpoint will be used for writes and which one for reads. Even here, the recommended approach at the end is to "manually split": https://github.com/luin/ioredis/issues/387
Do you know of any existing solution or good approach where I can set which endpoints will be used for writes and which for reads?
The most straightforward option for me right now is some kind of "proxy" layer, where I create two instances of Redis and send all writes to the primary endpoint and all reads to the Reader endpoint. However, I would prefer a better (or well-tested) approach.
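A minimal sketch of such a proxy layer, with plain objects standing in for the two ioredis clients; routeCommand and the command list are hypothetical and illustrative, not exhaustive:

```javascript
// Route read-only commands to the reader endpoint, everything else to the primary.
// READ_COMMANDS is illustrative; a real implementation would need the full
// list of read-only Redis commands.
const READ_COMMANDS = new Set(['get', 'mget', 'hget', 'hmget', 'hgetall', 'smembers', 'zrange']);

function routeCommand(command, primary, reader) {
  return READ_COMMANDS.has(command.toLowerCase()) ? reader : primary;
}

// Stand-ins for ioredis instances connected to the two ElastiCache endpoints.
const primary = { name: 'primary-endpoint' };
const reader = { name: 'reader-endpoint' };

console.log(routeCommand('HMGET', primary, reader).name); // reader-endpoint
console.log(routeCommand('HMSET', primary, reader).name); // primary-endpoint
```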
PS: I tried to "hack it" with the Cluster functionality of ioredis, but even a simple connection with no extra functionality and one primary endpoint fails with ClusterAllFailedError: Failed to refresh slots cache.
(To have the Reader endpoint enabled, Cluster mode must be off.)
Just a note about how it ended.
We had two instances (or reused the same instance if the URL was the same):

let redis = new Redis(RKT_REDIS_URL.href, redisOptions)
let redisro
if (RKT_REDIS_READER_URL.href === RKT_REDIS_URL.href) {
  redisro = redis
} else {
  redisro = new Redis(RKT_REDIS_READER_URL.href, redisOptions)
}
And then used the first for writes and the other for reads:
redis.hmset(key, update)
redisro.hmget(key, field)
However, after some time we adopted clustered Redis, and it is much better; I can recommend it. The ioredis npm module can also use it seamlessly: you don't have to configure anything, you just give it the configuration endpoint that e.g. AWS provides and that's it.
This was our configuration
redisOptions.scaleReads = 'master'
redis = new Redis.Cluster([RKT_REDIS_URL.href], redisOptions)
The options for scaleReads, from the ioredis docs:
scaleReads is "master" by default, which means ioredis will never send any queries to slaves. The other available options include:
"all": send write queries to masters and read queries to masters or slaves randomly.
"slave": send write queries to masters and read queries to slaves.
https://github.com/luin/ioredis

Where to store consistent JSON, Redis or global variable?

I've been using Node for my applications for a while, and I was wondering: where is a global or local variable stored? (In RAM, or maybe the CPU cache? Guessing RAM, right?) And is it a good idea to store JSONs that are static most of the time in a global variable and access them directly?
Would that be faster than reading from an in-memory database like Redis?
For example, let's say I am talking about something like a website's category list, which is a JSON with some nodes in it.
Most of the time this JSON is constant, and even if it changes I can refresh the variable with the new value, because one server app handles all requests.
And when the Node app starts, I can have an initializer function that reads the JSON from an on-disk database.
Currently I am using Redis for this: when the app starts, I read this JSON from MySQL and keep it in Redis for faster request handling.
But I'm wondering: is it good practice to keep the JSON in a global variable, and how would it compare to Redis performance-wise?
P.S.: I know Redis has persistence and keeps values on disk too, but I read them from MySQL because Redis is a caching mechanism for a small part of the schema, and using an initializer gives me a manual sync when needed.
Thanks
I would prefer Redis, because even if you restart the Node application the data will still be there. Putting global variables in memory has the disadvantage that if you want to change them at run time, your only option is to restart the whole application.
Also, while the application is running you can always query Redis whenever you want the data, so if in the future you want these values to be dynamic, the change will take effect simply by updating it in Redis.
You can keep it anywhere you want. You can store them as files and require them while starting your app. I'd prefer this if they do not change.
If you update them, then you can use any database or caching mechanism and read them. It's up to you.
Yes, the variables are stored in memory. They won't survive if the app crashes, so persistent storage is recommended.
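As a sketch of the global-variable approach from the question (loadCategoriesFromDb is a hypothetical stand-in for the real MySQL query): a module-level cache filled by an initializer, with a refresh function for when the data changes:

```javascript
// Module-level cache: lives in the Node process's RAM and is lost on restart or crash.
let categoriesCache = null;

// Hypothetical stand-in for the real MySQL read.
async function loadCategoriesFromDb() {
  return { electronics: ['phones', 'laptops'], books: ['fiction'] };
}

// Call once at startup, and again whenever the underlying data changes.
async function refreshCategories() {
  categoriesCache = await loadCategoriesFromDb();
}

// Request handlers read straight from memory: no network round trip to Redis.
function getCategories() {
  if (categoriesCache === null) throw new Error('cache not initialized');
  return categoriesCache;
}
```

Reading a local variable avoids the network round trip of a Redis GET, but as the answers note, the cache dies with the process and every app instance has to refresh it independently.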

Event when TTL is expired

Does Redis emit any kind of event when the TTL expires for a particular key?
I am looking to have a count of the keys added to Redis by my application at any given point in time. I increment a counter when I generate a key; similarly, I would like to decrement the counter when the key's TTL expires.
I know I can achieve this by running KEYS, but I am wondering if Redis generates some kind of event which I can capture when a key expires.
I will use NodeJS to capture the event.
Thanks,
Raghu.
Do not use KEYS in production - it is a potentially long-running, RAM-consuming, service-denying operation.
Yes, as of v2.8.0 Redis does have what you're looking for. Read the Redis Keyspace Notifications page, specifically about setting up the x flag and subscribing to relevant channels.
Note that while this is a great way to use Redis, Pub/Sub message delivery is not guaranteed, so your counters could drift over time if messages are lost. In that case, it would probably be good to periodically scan your database (using the SCAN command, not KEYS) to refresh them.
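A sketch of the counter logic described above, with the ioredis subscription wiring omitted (the channel name assumes database 0; makeKeyCounter is a hypothetical helper): increment on key creation, decrement when an expired-event message arrives, and allow a periodic reset from a SCAN:

```javascript
// Maintains a live-key count from keyspace notification messages.
// onExpiredMessage would be registered as the subscriber's 'message' handler;
// the actual subscribe call is omitted here.
function makeKeyCounter() {
  let liveKeys = 0;
  return {
    // call when the application creates a key
    onKeyAdded() { liveKeys += 1; },
    // channel is e.g. '__keyevent@0__:expired'; message is the expired key's name
    onExpiredMessage(channel, message) {
      if (channel === '__keyevent@0__:expired') liveKeys -= 1;
    },
    count() { return liveKeys; },
    // since Pub/Sub delivery is not guaranteed, periodically reset the count
    // from a full SCAN of the database
    resetTo(n) { liveKeys = n; }
  };
}

const counter = makeKeyCounter();
counter.onKeyAdded();
counter.onKeyAdded();
counter.onExpiredMessage('__keyevent@0__:expired', 'some_key');
console.log(counter.count()); // 1
```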
