Does Redis emit any kind of event when the TTL expires for a particular key?
I want to keep a count of the keys my application has in Redis at any given point in time. I increment a counter when I generate a key, and I would similarly like to decrement it when the key expires (its TTL elapses).
I know I can achieve this by executing 'KEYS', but I am wondering if Redis generates some kind of event that I can capture when a key expires.
I will use NodeJS to capture the event.
Do not use KEYS in production - it is a potentially long-running, RAM-consuming, service-denying operation.
Yes, as of v2.8.0 Redis has what you're looking for. Read the Redis Keyspace Notifications page, specifically about setting the x flag (expired events) in notify-keyspace-events and subscribing to the relevant channels.
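For example, a minimal subscriber could look like this (sketched with redis-py; the asker mentions Node.js, where node-redis subscribes to the same channel in the same way; the key_count counter name is an assumption):

```python
# A minimal sketch using redis-py. "key_count" is a hypothetical counter
# name; node-redis subscribes to the same channel in the same way.
import redis

r = redis.Redis()

# Enable expired-key events: "E" selects keyevent channels, "x" expired
# events. This can also be set in redis.conf via notify-keyspace-events.
r.config_set("notify-keyspace-events", "Ex")

p = r.pubsub()
# One message is published per expired key in DB 0; the payload is the
# name of the key that expired.
p.subscribe("__keyevent@0__:expired")

for msg in p.listen():
    if msg["type"] == "message":
        expired_key = msg["data"]
        r.decr("key_count")  # decrement the app's key counter
```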
Note that while this is a great way to use Redis, Pub/Sub message delivery is not guaranteed, so your counters could drift over time if messages are lost. In that case it would probably be good to periodically rescan your database (using the SCAN command, not KEYS) to refresh them.
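A periodic refresh along those lines might look like this (a sketch, assuming the same hypothetical key_count counter and a myapp:* key pattern):

```python
# A sketch of the periodic refresh: SCAN walks the keyspace in small
# chunks without the blocking behavior of KEYS. The "key_count" name and
# "myapp:*" match pattern are illustrative assumptions.
import redis

r = redis.Redis()

def refresh_key_count(match: str = "myapp:*") -> int:
    count = 0
    cursor = 0
    while True:
        cursor, keys = r.scan(cursor=cursor, match=match, count=500)
        count += len(keys)
        if cursor == 0:  # a zero cursor means the iteration is complete
            break
    r.set("key_count", count)  # overwrite any drift from lost messages
    return count
```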
Related
I have an API endpoint that creates and sends a few transactions in strict sequence. Because I don't wait for the results of these transactions, I specify a nonce for each of them so they execute in the right order.
The endpoint is built as an AWS Lambda function, so many concurrent requests cause multiple Lambda instances to run concurrently. In this case, several instances can obtain the same nonce (I'm using the eth.getTransactionCount method to get the latest transaction count) and send transactions with that same nonce. I then receive errors because, instead of creating a new transaction, the node tries to replace an existing one.
Basically, I need a way to check if a nonce is already taken right before the transaction sending or somehow reserve a nonce number (is it even possible?).
web3's getTransactionCount() only returns the number of already-mined transactions, and there's currently no way to get the highest pending nonce for an address using web3.
So you'll need to store your pending nonces in a separate DB (e.g. Redis). Each Lambda run will need to access this DB to get the highest pending nonce, calculate the one it's going to use (probably just +1), and store that number in the DB so that other instances can't use it anymore.
Note that it's recommended to implement a lock (Redis, DynamoDB) to prevent multiple app instances from reading the DB and claiming the same value at the same time.
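One way to sidestep a separate lock for the counter itself is to lean on Redis's atomic INCR, since a single INCR both reads and reserves a value. A minimal sketch, with an assumed nonce:&lt;address&gt; key name:

```python
# A minimal sketch assuming a "nonce:<address>" key. Because INCR is
# atomic, concurrent Lambda instances each receive a distinct nonce
# without an explicit lock around the counter itself.
import redis

r = redis.Redis()

def claim_nonce(address: str, chain_count: int) -> int:
    key = f"nonce:{address}"
    # Seed the counter only if it doesn't exist yet (first run, or after
    # a manual reset). chain_count comes from eth.getTransactionCount.
    r.set(key, chain_count - 1, nx=True)
    # Atomically reserve the next nonce for this caller.
    return r.incr(key)
```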
Basically, I need a way to check if a nonce is already taken right before the transaction sending or somehow reserve a nonce number (is it even possible?).
You should not.
Instead, you should manage the nonce in your internal database (SQL, etc.), which provides atomic counters and supports multiple readers and writers. You should only fall back to the network-provided nonce if 1) your system has failed, or 2) you need to manually reset it.
Here is example code for Web3.py and SQLAlchemy.
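A minimal sketch of that approach (not the original snippet), using a SELECT ... FOR UPDATE row lock as the atomic counter; the table, column, and URL names are illustrative assumptions:

```python
# Sketch: an atomic nonce counter in SQL via SELECT ... FOR UPDATE,
# suitable for use with Web3.py. Table/column names and connection
# URLs are illustrative assumptions.
from sqlalchemy import create_engine, Column, String, Integer
from sqlalchemy.orm import declarative_base, Session
from web3 import Web3

Base = declarative_base()

class AddressNonce(Base):
    __tablename__ = "address_nonces"
    address = Column(String, primary_key=True)
    next_nonce = Column(Integer, nullable=False)

engine = create_engine("postgresql:///mydb")  # assumption
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # assumption

def allocate_nonce(address: str) -> int:
    """Atomically read-and-increment the stored nonce for an address."""
    with Session(engine) as session, session.begin():
        # The row lock serializes concurrent allocators for this address.
        row = session.get(AddressNonce, address, with_for_update=True)
        if row is None:
            # Fall back to the network count only when our record is missing.
            row = AddressNonce(
                address=address,
                next_nonce=w3.eth.get_transaction_count(address),
            )
            session.add(row)
        nonce = row.next_nonce
        row.next_nonce = nonce + 1
        return nonce
```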
I'm trying to implement a basic service that receives a message and a time in the future, and once that time arrives, prints the message.
I want to implement it with Redis.
While investigating the capabilities of Redis, I found that by using keyspace notifications (https://redis.io/topics/notifications) on expired keys together with a subscription, I can get what I want.
But I am facing a problem: if the service is down for any reason, I might lose those expiry triggers.
To resolve that issue, I thought of having a queue (also in Redis) that stores expired keys; once the service is back up, it would pull all the expired values. But for that, I need some kind of "stored procedure" that handles the expiry routing.
Unfortunately, I couldn't find a way to do that.
So the question is: is this possible to implement with the current capabilities of Redis, and do I have alternatives?
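For what it's worth, one common alternative is to skip expiry notifications entirely and keep a sorted set scored by due time, polling for due members; triggers missed while the service is down then simply wait for the next poll. A rough sketch, with an assumed scheduled_msgs key:

```python
# Sketch of a sorted-set schedule: members are messages, scores are due
# timestamps. Nothing is lost during downtime; due items stay in the set
# until a poll removes them. The key name is an assumption.
import time
import redis

r = redis.Redis(decode_responses=True)

def schedule(msg: str, due_at: float) -> None:
    r.zadd("scheduled_msgs", {msg: due_at})

def poll_forever() -> None:
    while True:
        now = time.time()
        # Fetch everything whose due time has passed, including any
        # backlog accumulated while the service was down.
        due = r.zrangebyscore("scheduled_msgs", "-inf", now)
        for msg in due:
            # ZREM returns 1 only for the caller that removed the entry,
            # so competing workers won't print the same message twice.
            if r.zrem("scheduled_msgs", msg):
                print(msg)
        time.sleep(1)
```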
Right now I have transaction syncs with clients that go offline and online frequently. This means that the creation of a transaction document (when it goes into Pouch) doesn't align with the point at which it enters Couch.
Is there a way for me to tag these documents with a timestamp on confirmation of replication? I see there are advanced replication schedulers, but the completed flag does not apply to live replication, which is what we are using.
I have tried tagging the document before syncing it, but this doesn't account for network delay or backend replication delay. It is simply the time I started syncing that document; there's no guarantee it arrived in CouchDB at that point.
You would need to use an add-on like spiegel (using on_change documents to call back into an update function) or another client (PouchDB?) to observe the changes feed and add a timestamp for when each document became available to that client from CouchDB (which might be slightly delayed).
Such a client would be in danger of creating an infinite loop, as #Flimzy indicated in the comments, unless it follows a rule not to re-update docs that already have timestamps; that way it does not write when re-triggered by its own update and therefore stops re-triggering itself. Spiegel supports such a rule, and/or stopping the loop could be handled inside the update function.
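As a rough illustration of the second option, a small client could follow the _changes feed and stamp unstamped docs; the database URL and the replicated_at field name are assumptions, and in production you'd also persist the sequence number across restarts:

```python
# Sketch: follow CouchDB's _changes feed and stamp each doc with the
# time it became visible, skipping docs that already carry a stamp to
# avoid the infinite loop described above. URL, db name, and the
# "replicated_at" field are illustrative assumptions.
import time
import requests

COUCH = "http://localhost:5984/mydb"  # assumption

def stamp_changes() -> None:
    since = "0"
    while True:
        resp = requests.get(
            f"{COUCH}/_changes",
            params={"since": since, "include_docs": "true", "feed": "longpoll"},
        ).json()
        for change in resp.get("results", []):
            doc = change.get("doc")
            if not doc or doc.get("_deleted"):
                continue
            # Guard rule: never re-stamp, so our own write doesn't
            # re-trigger us forever.
            if "replicated_at" in doc:
                continue
            doc["replicated_at"] = time.time()
            requests.put(f"{COUCH}/{doc['_id']}", json=doc)
        since = resp.get("last_seq", since)
```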
My aim is to log unique queries per session by writing a custom QueryHandler implementation, since logging all queries causes a performance hit in our case.
Consider this case: a user connects to the Cassandra cluster with a Java client and performs "select * from users where id = ?" 100 times, and another user connects from cqlsh and performs the same query 50 times. I want to log only two queries in this case. For that I need a unique session ID per login.
Cassandra provides the interface below, where all requests land, but none of its APIs expose a session ID that would differentiate the two sessions described above.
org.apache.cassandra.cql3.QueryHandler
Note: I am able to get the remote address/port, but I want an ID that is created when a user logs in and destroyed when they disconnect.
In queryState.getClientState().getRemoteAddress(), the address + port will be unique per TCP connection in the session's connection pool. There can be multiple concurrent requests over each connection, though, and a session can have multiple connections per host. There is also no guarantee on the client side that the same TCP connection will be used from one request to the next.
However, a single session cannot be connected as two different users (the user is part of connection initialization), so the scenario you described isn't possible from the perspective of a single Session object. I think using the address as the uniqueness key is all you can do given how the protocol/driver works. It will at least dedup things a little.
Are you actually processing your logging inline, or are you pushing it off asynchronously? If you're using logback it should use an async appender, but if you're posting events synchronously to another server, it might be better to put all the events on a queue and dedup them in another thread so you don't hurt latency.
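The queue-and-dedup idea, sketched language-agnostically in Python (inside a real QueryHandler this would be Java; keying on address + query string is the assumption discussed above):

```python
# Sketch: push (session-key, query) events onto a queue and dedup them
# on a background thread, so the request path never blocks on logging.
# The key shape and logger name are illustrative assumptions.
import logging
import queue
import threading

log = logging.getLogger("unique-queries")
events: "queue.Queue[tuple[str, str]]" = queue.Queue()
seen: set = set()

def worker() -> None:
    while True:
        key = events.get()  # (remote address:port, query string)
        if key not in seen:
            seen.add(key)
            log.info("session=%s query=%s", key[0], key[1])
        events.task_done()

threading.Thread(target=worker, daemon=True).start()

def record(session_addr: str, query: str) -> None:
    # Called from the hot path; enqueue and return immediately.
    events.put((session_addr, query))
```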
I'm trying to manage date/time event notifications using Node.js on the server. Is there a programming pattern that I can use and apply to JavaScript?
Currently, I'm using named setTimeouts and Redis to store a boolean value for each timeout. When a timeout fires, it checks the boolean in Redis: if it's true, the notification executes; if it's false, the user has removed the event and no notification is sent.
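That pattern, sketched in Python for illustration (threading.Timer standing in for Node's setTimeout; key names are assumptions):

```python
# Rough sketch of the described pattern: arm a timer per event and check
# a Redis flag when it fires. threading.Timer stands in for Node's
# setTimeout; key names are illustrative assumptions.
import threading
import redis

r = redis.Redis()

def schedule_notification(event_id: str, delay_seconds: float, msg: str) -> None:
    r.set(f"event:{event_id}", 1)  # "true": event is still active

    def fire() -> None:
        # The user may have removed the event since scheduling.
        if r.get(f"event:{event_id}"):
            print(msg)  # send the notification
        r.delete(f"event:{event_id}")

    threading.Timer(delay_seconds, fire).start()

def cancel_notification(event_id: str) -> None:
    r.delete(f"event:{event_id}")  # the timer will fire but do nothing
```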
This solution works, but I don't believe it will be scalable, for several reasons:
1) Events could be days away. I don't trust Redis to store these events for that long.
2) There could potentially be thousands of events, and I don't want setTimeouts running all over the place, especially after an event has been removed.
I know this problem has been solved, so I'm hoping someone can point me to a resource or offer up a common pattern.
Are you looking for something like node-cron?
You can use Redis keyspace notifications. Yes, Redis now has a feature that invokes an event when a key expires.