I have the following model.
Events are published by my back-end services. Webhooks are processed by a web app that simply queues jobs based on the event that was fired. The queueing service can subsequently do things such as make calls to other back-end services. The issue with this model is that missed events, e.g. when the webhook processor app experiences downtime, are lost. What are best practices for tracking and replaying these missed events?
I know to avoid the "database as a queue" anti-pattern, and it seems like the solution would involve some kind of message queue. I'm hoping that someone who has solved this problem could shed some light.
Related
I have a system where losing messages from Azure Service Bus would be a disaster, that is, the data would be lost forever with no practical means to repair the damage without major disruption.
Would I ever be able to rely on ASB entirely in this situation? (Even if it were to go down for hours, it would always come back up with the same messages it was holding when it failed.)
If not, what is a good approach to solving this problem where messages are crucial and cannot be lost? Is it common to simply persist the message to a database prior to publishing, so that it can be referred to in the event of a disaster?
Azure Service Bus doesn’t lose messages. Once a message is successfully received by the broker, it’s there and doesn’t go anywhere. Where things usually go wrong is with message receiving and processing, which is custom user code. That’s one of the reasons why Service Bus has the PeekLock receive mode and dead-lettering based on delivery count.
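As an illustration of those PeekLock semantics, here is a minimal sketch using the Azure Service Bus SDK for Java (the connection string and queue name are placeholders): the message is only removed from the queue when the handler completes it, and abandoned deliveries increment the delivery count until the broker dead-letters the message.

    import com.azure.messaging.servicebus.*;
    import com.azure.messaging.servicebus.models.ServiceBusReceiveMode;

    public class PeekLockReceiver {
        public static void main(String[] args) {
            ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
                    .connectionString(System.getenv("SERVICEBUS_CONNECTION")) // placeholder
                    .receiver()
                    .queueName("orders") // hypothetical queue name
                    .receiveMode(ServiceBusReceiveMode.PEEK_LOCK)
                    .buildClient();

            for (ServiceBusReceivedMessage message : receiver.receiveMessages(10)) {
                try {
                    process(message.getBody().toString());
                    receiver.complete(message); // only now is the message removed
                } catch (Exception e) {
                    receiver.abandon(message);  // lock released; redelivered with an
                                                // incremented delivery count, and
                                                // dead-lettered past MaxDeliveryCount
                }
            }
            receiver.close();
        }

        private static void process(String body) {
            // your processing logic
        }
    }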
If you need strong guarantees, don’t reinvent the wheel yourself. Use messaging frameworks that do it for you, such as NServiceBus or MassTransit.
With Azure Service Bus you can do this and be 99.99% sure:
in the worst case you will find your message in the dead-letter queue, but it will never be deleted.
Another choice is to use an Azure Storage Queue and set the TTL to -1, which gives the message an infinite lifetime (see the sketch below).
But because I'm a little bit old school and want to be 101% sure, I would suggest a manual solution using Azure Table Storage,
so that it's you who decides when to add, delete, or update a row, given the criticality of the information and data you work with.
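For reference, here is what the Storage Queue option looks like with the Azure Storage Queue SDK for Java (the connection string and queue name are placeholders): a time-to-live of -1 seconds tells the service to keep the message until it is explicitly deleted.

    import com.azure.core.util.Context;
    import com.azure.storage.queue.QueueClient;
    import com.azure.storage.queue.QueueClientBuilder;
    import java.time.Duration;

    public class InfiniteTtlSend {
        public static void main(String[] args) {
            QueueClient queue = new QueueClientBuilder()
                    .connectionString(System.getenv("STORAGE_CONNECTION")) // placeholder
                    .queueName("critical-events")                          // hypothetical name
                    .buildClient();

            // visibilityTimeout = null (use the default); timeToLive = -1 second
            // asks the service to keep the message until it is explicitly deleted
            queue.sendMessageWithResponse("payload", null, Duration.ofSeconds(-1),
                    null, Context.NONE);
        }
    }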
We are building an application where IoT devices (temperature sensors) will push data to Azure IoT Hub.
There will be a webjob that reads this data and pushes it into a database (after rolling it up, alongside the raw readings).
We also need a feature on the web application where a user can subscribe to any room/area and we push the current temperature to their screen (whenever it changes). This is required only while the user is on that screen.
We were planning to use Redis pub/sub for this task.
The webjob can publish this data to Redis pub/sub (as well as to the database), and the web application will subscribe to Redis pub/sub (only for the users who have subscribed to the web server using SignalR).
Any thoughts on this design? Is Redis pub/sub a good choice in this case?
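For illustration, a minimal sketch of the publishing side using Jedis (the channel name and payload are hypothetical); the webjob would publish each reading after writing it to the database:

    import redis.clients.jedis.Jedis;

    public class TemperaturePublisher {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // one channel per room/area, so subscribers receive
                // only the rooms they asked for
                jedis.publish("room:12", "{\"temperature\": 21.5}");
            }
        }
    }

Note that Redis pub/sub is fire-and-forget: a subscriber that is disconnected at publish time never sees the message, which is arguably acceptable here since only the current temperature matters.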
Usually I prefer to use a message queue such as RabbitMQ for this kind of work.
Redis does support pub/sub, and makes it very simple and fast. It is a good choice if you ONLY need pub/sub.
RabbitMQ, on the other hand, has more features and, for me, is easier to debug with.
What's more, you need to think about high availability and persistence. With Redis you may need to implement these yourself, but message queues may already have solutions for them.
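To make the persistence point concrete, here is a minimal RabbitMQ sketch in Java (the queue name and payload are hypothetical): a durable queue plus persistent delivery mode survives a broker restart, which is the kind of guarantee you would otherwise have to build yourself on top of Redis.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.MessageProperties;
    import java.nio.charset.StandardCharsets;

    public class DurablePublish {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {
                // durable = true: the queue definition survives a broker restart
                channel.queueDeclare("readings", true, false, false, null);
                // PERSISTENT_TEXT_PLAIN: the message body is written to disk
                channel.basicPublish("", "readings",
                        MessageProperties.PERSISTENT_TEXT_PLAIN,
                        "{\"temperature\": 21.5}".getBytes(StandardCharsets.UTF_8));
            }
        }
    }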
I'm hoping someone can point me in the right direction. I see a chat example for Actors at https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reliable-actors-pattern-distributed-networks-and-graphs/#smart-cache-code-sample-groupchat but that example only shows part of the chat story. Chat's a good example for me because it's similar to the problem I'm trying to solve.
For my problem, clients that push messages into the Actor network also need to receive updates when the state of that network changes. I believe the obvious tool for this is SignalR, but I'm kind of stuck at that point. The Actor SDK doesn't seem to provide a reliable way of streaming state changes out of an Actor. And from what I read, Actor Events don't seem to be reliable enough for this scenario (I'm guessing that's the case, since the documentation says "best effort").
So take the chat example on the SF site. Where do I go from there? How do I subscribe to Actor updates from an ASP.NET Signalr Hub?
The best answer I've come up with so far is to move the pub/sub concerns out of Service Fabric Actors and over to another tool. For example, for my particular problem, I use Redis's pub/sub feature.
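For the subscribing side, a minimal Jedis sketch (the channel name is hypothetical): a background thread blocks on the subscription and hands each state change to whatever pushes it to connected clients, such as a SignalR hub in the web tier.

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPubSub;

    public class ActorUpdateSubscriber {
        public static void main(String[] args) {
            // subscribe blocks its calling thread, so run it on a dedicated thread
            new Thread(() -> {
                try (Jedis jedis = new Jedis("localhost", 6379)) {
                    jedis.subscribe(new JedisPubSub() {
                        @Override
                        public void onMessage(String channel, String message) {
                            // forward the state change to connected clients here
                            System.out.println(channel + ": " + message);
                        }
                    }, "actor-updates"); // hypothetical channel name
                }
            }).start();
        }
    }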
Typically you want a 1-to-1 or 1-to-many relationship between client and actor. With that in mind, I would design it so that the client sends messages to the actor, and the actor is responsible for ensuring the message is delivered to the "session" (via a reminder). I view the session as a stateful service backed by a reliable collection for that "session". As updates reach the session, it calls the hub method to broadcast the changes to the appropriate clients.
The Problem
Our Liferay system is the basis for synchronizing data with other web applications.
We use Model Listeners for that purpose.
There are a lot of web-service calls and database updates in the listeners, and consequently the corresponding action in Liferay is too slow.
For example:
When a User is added in Liferay, we need to fire a lot of web-service calls to add user details and update other systems with the user data, as well as some Liferay custom tables. So adding a User takes a lot of time, and in a few rare cases the request may time out!
Since the code in the UserListener depends only on the user details, and the User would still be added in Liferay even if an exception occurred in the UserListener, we have thought of the following solution.
We also have a scheduler in Liferay that fixes things up if an exception occurred while executing the listener code.
Proposed Solution
We thought of making the code in the UserListener asynchronous by using the Java Concurrency API.
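As a minimal sketch of that idea (Liferay 6.x import paths; syncUserToExternalSystems is a hypothetical helper), the listener submits the work to a shared executor and copies plain values off the entity rather than handing the live entity to another thread:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import com.liferay.portal.model.BaseModelListener;
    import com.liferay.portal.model.User;

    public class AsyncUserListener extends BaseModelListener<User> {

        // shared pool; shut it down from a portal lifecycle hook
        private static final ExecutorService EXECUTOR = Executors.newFixedThreadPool(4);

        @Override
        public void onAfterCreate(User user) {
            // copy plain values; don't share the live entity across threads
            final long userId = user.getUserId();
            final String email = user.getEmailAddress();

            EXECUTOR.submit(() -> syncUserToExternalSystems(userId, email));
        }

        private void syncUserToExternalSystems(long userId, String email) {
            // web-service calls and custom-table updates go here
        }
    }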
So here are my questions:
Is it recommended to have concurrent code in Model Listeners?
If yes, will it have any adverse effects, such as transaction issues, if we also update Liferay custom tables through this code?
What can be other general Pros and Cons of this approach?
Is there any other, better way to push real-time updates to other systems without hampering the user experience?
Thank you for any help on this matter.
It makes sense that you want to use concurrency to solve this issue.
Doing intensive work, like invoking web services, in the thread that modifies the model is not really a good idea, quite apart from the impact it will have on the user experience.
Firing off threads within the model listeners may be somewhat complex and hard to maintain.
You could explore using Liferay's Message Bus paradigm where you can send a message to a disconnected message receiver which will then do all the intensive work outside of the model listener's calling thread.
Read more about the message bus here:
Message Bus Developer Guide
Message Bus Wiki
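As a minimal sketch of that approach (Liferay 6.x import paths; the destination name "myapp/user_sync" is hypothetical and would be registered in your messaging configuration):

    import com.liferay.portal.kernel.messaging.Message;
    import com.liferay.portal.kernel.messaging.MessageBusUtil;
    import com.liferay.portal.kernel.messaging.MessageListener;
    import com.liferay.portal.kernel.messaging.MessageListenerException;

    // called from the model listener: fire-and-forget keeps the request thread fast
    public class UserSyncSender {
        public static void send(long userId) {
            Message message = new Message();
            message.put("userId", userId);
            MessageBusUtil.sendMessage("myapp/user_sync", message);
        }
    }

    // registered against the destination; runs outside the model listener's thread
    class UserSyncMessageListener implements MessageListener {
        @Override
        public void receive(Message message) throws MessageListenerException {
            long userId = message.getLong("userId");
            // heavy web-service calls and custom-table updates happen here
        }
    }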
I have a domain (write) database and a reporting (read) database on the same machine.
Currently, events are raised and put on an in-memory queue, and then the corresponding handlers are called to update the reporting database.
What concerns me is what happens if there is an issue with the reporting database and, for some reason, writing fails for an event. I suppose this is where NServiceBus etc. would be useful, but at this stage we do not have the time to invest in looking into it.
Now, if new events are being raised, should I hold off processing them until the problem event has been processed? Would this require manual intervention? Otherwise, the other events will all get queued behind the problem event and nothing will be updated in the reporting database.
Also, I suppose I need to persist the events in case the machine goes down.
I'm afraid you're going to end up implementing your own, poorer NServiceBus. You will not save time this way. NSB is already out there; just read the manual and don't reinvent the wheel.