Manage Future Event Notification Using Node.js, Redis, and CouchDB

I'm trying to manage date/time event notifications using Node.js on the server. Is there a programming pattern that I can use and apply to JavaScript?
Currently, I'm using named setTimeouts and Redis to store a boolean value for each timeout. When the timeout fires it checks Redis for a boolean value. If it returns true, the notification executes. If the value returns false, this means the user has removed the event and there is no notification.
This solution works, but I don't believe it will be scalable, for several reasons:
1) Events could be days away. I don't trust Redis to store these events for that long.
2) There could potentially be thousands of events, and I don't want setTimeouts running all over the place, especially after an event was removed.
I know this problem has been solved, so I'm hoping someone can point me to a resource or offer up a common pattern.
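Roughly, the pattern described above looks like this (a minimal sketch with node-redis; scheduleNotification, removeEvent, and the event:<id> key are illustrative names, not from the original post):

const redis = require('redis');
const client = redis.createClient();

// Schedule a named timeout and record a "live" flag in Redis.
function scheduleNotification(eventId, fireAt, notify) {
  client.set(`event:${eventId}`, 'true');
  setTimeout(() => {
    // When the timeout fires, only notify if the flag is still true.
    client.get(`event:${eventId}`, (err, value) => {
      if (!err && value === 'true') notify(eventId);
    });
  }, fireAt - Date.now());
}

// Removing an event flips the flag; the timeout still fires but does nothing.
function removeEvent(eventId) {
  client.set(`event:${eventId}`, 'false');
}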

Are you looking for something like node-cron?
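A minimal node-cron sketch, assuming a job that runs every minute and checks storage for due events (checkDueEvents is a hypothetical helper):

const cron = require('node-cron');

// Every minute, look for events that are now due and fire their notifications.
const task = cron.schedule('* * * * *', () => {
  checkDueEvents(); // hypothetical: query for due events and notify
});

// task.stop() cancels the schedule when it is no longer needed.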

You can use Redis keyspace notifications.
Redis now has a feature that publishes an event when a key expires, so you can subscribe to those expiry notifications.
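A sketch of what that looks like with node-redis (the __keyevent channel name is Redis's standard expired-event channel; sendNotification is a hypothetical handler):

const redis = require('redis');

// Requires notify-keyspace-events to include "Ex" in redis.conf
// (or set at runtime with: CONFIG SET notify-keyspace-events Ex).
const subscriber = redis.createClient();

// __keyevent@0__:expired fires once for every key expiring in DB 0.
subscriber.subscribe('__keyevent@0__:expired');
subscriber.on('message', (channel, expiredKey) => {
  // expiredKey is the name of the key that just expired.
  sendNotification(expiredKey); // hypothetical handler
});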

Related

How to ensure reliable publication when sending an event about a successful DB insertion to Event Hub?

Context:
1) In an Azure Function with an EventHubTrigger, I save data mapped from the handled event to a database (through Entity Framework). This action is performed synchronously.
2) Trigger a new event about the successful data insertion using an Event Hub producer. This action is async.
3) Handle that triggered event at some other place.
I guess something might fail while saving the data, so I am wondering how to prevent inconsistency and ensure the event is not sent when it should not be.
As far as I know Azure Event Hub has no outbox pattern implemented yet, so I guess I would need to mimic it somehow.
I am also thinking about an alternative and a bit smelly solution: make the publish-event method in step 2 synchronous (even if the nature of event-driven systems is to be async) and add an additional check between step 1 and step 2 to make sure that everything is saved in the DB. Only if that condition is fulfilled is the event triggered (step 3).
Any advice?
There's nothing in the SDK that would manage distributed transactions on your behalf. The simplest approach would likely be having a column in your database that allows you to mark when the event was published, and then have your function flow:
Write to the database with the "event published" flag unset; on failure abort.
Publish the event; on failure abort. (The data stays written.)
Write to the database to set the "event published" flag.
You'd need a second Function running on a timer that scans your database for rows older than XX minutes that still need an event, and then performs steps 2 and 3 from the initial flow. In failure scenarios you will see some latency between the data being written and the event being published, or you may see duplicate events. (Event Hubs has an at-least-once guarantee, so you'll need to be able to handle duplicates regardless.)
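A rough sketch of that flow in a Node.js Function (the question uses Entity Framework in C#; this only illustrates the ordering in Node.js terms, with db and its methods as hypothetical placeholders; the @azure/event-hubs producer calls are real SDK APIs):

const { EventHubProducerClient } = require('@azure/event-hubs');

// Placeholder wiring: connection settings and db are illustrative only.
const producer = new EventHubProducerClient(
  process.env.EVENTHUB_CONNECTION,
  process.env.EVENTHUB_NAME
);

module.exports = async function (context, eventHubMessages) {
  for (const message of eventHubMessages) {
    // 1) Write with the "event published" flag unset; a throw aborts here.
    const row = await db.insert({ ...message, eventPublished: false });

    // 2) Publish; on failure the row stays written with the flag unset,
    //    so a timer-driven sweeper Function can retry steps 2 and 3 later.
    await producer.sendBatch([{ body: { id: row.id } }]);

    // 3) Mark the row as published.
    await db.markEventPublished(row.id);
  }
};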

Redis Streams: How to manage perpetual subscription and BLOCK behaviour?

I am using Redis in an Express application. My app is both a publisher and consumer of streams, using a single redis connection (redis.createClient).
I have a question on the best way to manage a perpetual subscription (with xreadgroup).
Currently I am doing this:
const readStream = () =>
  xreadgroup('GROUP', appId, consumerId, 'BLOCK', 1, 'COUNT', 1, 'STREAMS', key, '>')
    .then(handleData)
    .then(() => setImmediate(readStream));
Where xreadgroup is just a promisified version of node-redis' xreadgroup.
My question: what is the appropriate usage of BLOCK? If I block indefinitely or for a long period, then my client cannot publish any messages (with xadd) until it is unblocked or the block times out. Since I must use some sort of loop/recursion in order to keep reading events, BLOCK appears to be fairly unnecessary; can I just leave it off, and is this the expected usage?
Likewise, is using setImmediate appropriate, or would process.nextTick or an async loop be preferred?
There is very little documentation in node-redis and the few examples simply read the messages once after blocking and do not produce/consume on the same client.
Not an expert on the subject, but I'd like to share some thoughts that might help.
I'm not sure if node-redis can "stack" multiple commands, meaning: will it be able to fire new commands while waiting for the XREADGROUP to complete?
From your description, seems like that's what's happening. In that case, I suggest you create a dedicated connection to call XREADGROUP - this way you can publish and listen without blocking one another.
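For example (a sketch; duplicate() is a real node-redis call that creates a new connection with the same options):

const redis = require('redis');

const client = redis.createClient();        // XADD and everything else
const blockingClient = client.duplicate();  // dedicated connection for XREADGROUP

// Blocking reads now only tie up blockingClient; client stays free to publish.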
You don't need to use BLOCK; but if your goal is to listen for all events and wait for those not yet published, using it might be wise and will give you better performance while making fewer calls to Redis.
setImmediate is probably fine, especially when using BLOCK. If you don't use BLOCK, then it might be good to add a small delay between calls, since without BLOCK the calls return almost immediately.
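If you drop BLOCK, a small delay between polls keeps the loop from hammering Redis (a sketch reusing the promisified xreadgroup from the question; the 100 ms figure is an arbitrary example):

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Without BLOCK the call returns immediately, so pause briefly between polls.
const readStream = () =>
  xreadgroup('GROUP', appId, consumerId, 'COUNT', 1, 'STREAMS', key, '>')
    .then(handleData)
    .then(() => delay(100))
    .then(readStream);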
Friendly reminder: don't forget to ACK your messages or use NOACK (might be ok depending on your use case):
consumer groups require explicit acknowledgment of the messages successfully processed by the consumer, via the XACK command.
The NOACK subcommand can be used to avoid adding the message to the PEL in cases where reliability is not a requirement and the occasional message loss is acceptable.
Source: https://redis.io/commands/xreadgroup
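Acknowledging in the loop above might look like this (a sketch assuming a promisified xack like the xreadgroup in the question; processMessage is hypothetical):

// XREADGROUP replies as [[stream, [[id, fields], ...]], ...]; after a message
// is processed successfully, XACK removes it from the Pending Entries List.
const handleData = (reply) => {
  const [, messages] = reply[0];
  return Promise.all(
    messages.map(([id, fields]) =>
      processMessage(fields).then(() => xack(key, appId, id))
    )
  );
};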

Redis stored procedures like functionality

I'm trying to implement a basic service that receives a msg and a time in the future, and once the time arrives, it prints the msg.
I want to implement it with Redis.
While investigating the capabilities of Redis, I've found that by using keyspace notifications (https://redis.io/topics/notifications) on expired keys together with subscribing, I can get what I want.
But I am facing a problem: if the service is down for any reason, I might lose those expiry triggers.
To resolve that issue, I thought of having a queue (in Redis as well) which will store expired keys; once the service is up, it will pull all the expired values. But for that, I need some kind of "stored procedure" that will handle the expiry routing.
Unfortunately, I couldn't find a way to do that.
So the question is: is it possible to implement with the current capabilities of Redis, and also, do I have alternatives?
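One common alternative pattern, which avoids relying on expiry events entirely, is to store due times in a sorted set and poll for entries that are due; a restart then only delays delivery instead of losing it. A minimal sketch with node-redis (key name and 1-second interval are arbitrary choices):

const redis = require('redis');
const { promisify } = require('util');

const client = redis.createClient();
const zadd = promisify(client.zadd).bind(client);
const zrangebyscore = promisify(client.zrangebyscore).bind(client);
const zrem = promisify(client.zrem).bind(client);

// Schedule: the score is the due timestamp, the member is the message.
const schedule = (msg, dueAt) => zadd('scheduled', dueAt, msg);

// Poll every second for anything due by now.
async function pollDue() {
  const due = await zrangebyscore('scheduled', 0, Date.now());
  for (const msg of due) {
    console.log(msg);             // "print the msg"
    await zrem('scheduled', msg); // remove once handled
  }
  setTimeout(pollDue, 1000);
}
pollDue();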

Axon creating aggregate inside saga

I'm not sure how to properly ask this question, but here it is:
I'm starting the saga on a specific event, then I'm dispatching a command which is supposed to create some aggregate and then send another event, which will be handled by the saga to proceed with the logic.
However, each time I restart the application I get an error saying that an event for the aggregate at sequence x was already inserted, which, I suppose, is because the saga has not yet finished, and when I restart the application it starts again by trying to create a new aggregate.
The question is: is there any way in Axon to track the progress of the saga? Should I set some flags when I receive the event and wrap the aggregate creation in ifs?
Maybe there is another way that I'm not seeing; I just don't want the saga to be replayed from the start.
Thanks
The solution you've posted definitely would work.
Let me explain the scenario you've hit here though, for other people's reference too.
In an Axon Framework 4.x application, any Event Handling Component, thus also your Saga instances, are backed by a TrackingEventProcessor.
The Tracking Event Processor "keeps track of" which point in the Event Stream it is handling events. It stores this information through a TrackingToken, for which the TokenStore is the delegating piece of work.
If you haven't specified a TokenStore however, you will have in-memory TrackingTokens for every Tracking Event Processor.
This means that on a restart, your Tracking Event Processor thinks "oh, I haven't done any event handling yet, let me start from the beginning of time".
Due to this, your Saga instances will start anew every time, trying to recreate the given Aggregate instance.
Hence, specifying the TokenStore as you did resolves the problem you had.
Note that in a Spring Boot environment, with for example the Spring Data starter present, Axon will automatically create a JpaTokenStore for you.
I've solved my issue by simply adding the token store configuration; it does exactly what I require - tracking processed events.
Basic spring config:
@Bean
fun tokenStore(client: MongoClient): TokenStore = MongoTokenStore.builder()
    // Persist TrackingTokens in MongoDB so processors resume where they left off.
    .mongoTemplate(DefaultMongoTemplate.builder().mongoDatabase(client).build())
    .serializer(JacksonSerializer.builder().build())
    .build()

Logic App - retrieve a batch of messages from a sessions-enabled Service Bus Queue

So I'd like to perform the following: every N seconds, get X messages from a sessions-enabled queue (peek-lock) and then send them together (in a single request) to the next processing point. Here are the options I've come up with so far:
1) "Get messages from a queue" action - seems like it requires me to hardcode a session ID beforehand(?), which is not that handy.
2) "Batch receiver" logic app - it's still in preview.
3) Custom trigger - seems like it will work, but requires extra coding.
Any suggestions on how to effectively achieve it via Logic Apps with stuff available today?
You don't need Sessions specifically to retrieve a specific number of messages in a batch... just read 10 messages, then do whatever processing you need.
If you also need to retrieve the messages in order, then yes, use a session-enabled Queue where all callers use the same SessionId.
Keep in mind, the SessionId is an arbitrary application value, so you can use the same value as the Queue name if you want. I don't see this as any kind of hurdle; it's just how it works.
You can use a Recurrence Trigger at whatever interval you need.
Sessions are primarily for grouping messages. The SessionId can be any specific arbitrary value (HighPriority/LowPriority) or a value determined at runtime, such as a GUID, if you're doing correlation among specific related messages. Now that I think about it, the FIFO side effect seems more intended to support correlation scenarios.
One way to address this is to set the maximum concurrency on the Logic App: go to the settings of the Service Bus receiving action and enable concurrency with a degree of parallelism of 10.
