I have a system where losing messages from Azure Service Bus would be a disaster, that is, the data would be lost forever with no practical means to repair the damage without major disruption.
Would I ever be able to rely on ASB entirely in this situation? (Even if it were to go down for hours, would it always come back up with the same messages it was holding when it failed?)
If not, what is a good approach for solving this problem where messages are crucial and cannot be lost? Is it common to simply persist the message to a database prior to publishing so that it can be referred to in the event of a disaster?
Azure Service Bus doesn’t lose messages. Once a message is successfully received by the broker, it’s there and doesn’t go anywhere. Where usually things go wrong is with message receiving and processing, which is user custom code. That’s one of the reasons why Service Bus has PeekLock receive mode and dead-lettering based on delivery count.
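As a rough illustration, here is a minimal PeekLock consumer sketch using the Python azure-servicebus SDK; the connection string, queue name, and the handle() function are placeholders, not anything from the question. The message only leaves the queue after complete_message, and repeated failures eventually push it to the dead-letter queue.

```python
from azure.servicebus import ServiceBusClient, ServiceBusReceiveMode

CONN_STR = "<service-bus-connection-string>"   # placeholder
QUEUE_NAME = "critical-messages"               # placeholder

def handle(msg):
    print(str(msg))                            # stand-in for your real processing

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    receiver = client.get_queue_receiver(
        queue_name=QUEUE_NAME,
        receive_mode=ServiceBusReceiveMode.PEEK_LOCK,   # lock, don't delete, on receive
    )
    with receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            try:
                handle(msg)
                receiver.complete_message(msg)   # only now is the message removed from the queue
            except Exception:
                receiver.abandon_message(msg)    # lock released; redelivered until the delivery
                                                 # count limit is hit, then dead-lettered
```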
If you need strong guarantees, don’t reinvent the wheel yourself. Use messaging frameworks that do it for you, such as NServiceBus or MassTransit.
With Azure Service Bus you can do this and be 99.99% sure:
in the worst case you will find your message in the dead-letter queue, but it will never be deleted.
Another option is to use an Azure Storage Queue and set the message TTL to -1, which gives it an infinite lifetime.
But because I'm a bit old school and want to be 101% sure, I would suggest a manual solution using Azure Table Storage,
so that it is you who decides when to add, delete or update a row, given how critical the information and data you are working with is.
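If you do go the manual Table Storage route, a minimal sketch with the azure-data-tables package could look like the following; the table name, partition scheme, and the save-then-publish flow are my own assumptions, not a prescribed pattern.

```python
import json
import uuid
from azure.data.tables import TableServiceClient

CONN_STR = "<storage-connection-string>"                        # placeholder
service = TableServiceClient.from_connection_string(CONN_STR)
table = service.create_table_if_not_exists("criticalmessages")  # placeholder table name

def save_then_publish(payload: dict, publish) -> None:
    """Persist a durable copy first, publish second, then mark it as sent."""
    entity = {
        "PartitionKey": "messages",                             # placeholder partitioning
        "RowKey": str(uuid.uuid4()),
        "Body": json.dumps(payload),
        "Status": "pending",
    }
    table.create_entity(entity)      # the row exists before the broker ever sees the message
    publish(payload)                 # e.g. send to Service Bus / Storage Queue
    entity["Status"] = "sent"
    table.update_entity(entity)      # you decide when a row is added, updated or deleted
```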
I have a simple scenario where I want to take the diff between the current value of a parameter and its previous value from IoT Hub telemetry messages, attach this result, and send it to a Time Series Insights environment (via an event hub if required).
How can I achieve this? I am studying Azure Functions but am not able to figure out how exactly to go about it.
The minimum timestamp difference between messages is 1 second and only edge devices (at max perhaps 3) will send the telemetry data. Each edge device might be collecting data from around 500 devices.
I am looking for guidance on the logical steps involved and a few critical pieces of Python code.
Are these telemetry messages or property changes? Also, what's the scale (number of devices)? To do this effectively you need to ensure you have both the current and previous values, which means storing the last reported value and timestamp externally, as it could be a long time between messages. The Event Hub is not guaranteed to have all past messages (default retention is 24h), so if there's a long lag between messages it's not the right store to rely on.
Durable Entities can be used to store state (using something similar to the Actor Model). These are persisted in Azure Storage so at extremely high throughput a memory-only calculation option might make sense with delayed persistence, but you can build a memory-caching layer into your function to help if needed. This is likely going to be the best bet for what you want to do.
For most people the performance hit of going to Azure storage and back is minimal and Durable Entities will be the easiest path forward.
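As a rough sketch of that approach with azure-functions-durable in Python, one durable entity per device could hold the last reading and hand back the diff; the entity operation name and message shape below are assumptions for illustration only.

```python
# Durable entity: one instance per device, e.g. entity key = device id (sketch only).
import azure.durable_functions as df

def entity_function(context: df.DurableEntityContext):
    previous = context.get_state(lambda: None)        # last {"ts": ..., "value": ...} or None
    if context.operation_name == "report":
        reading = context.get_input()                 # assumed shape: {"ts": ..., "value": ...}
        diff = None if previous is None else reading["value"] - previous["value"]
        context.set_state(reading)                    # remember the latest reading
        context.set_result({"ts": reading["ts"], "diff": diff})

main = df.Entity.create(entity_function)
```

An orchestration would call the entity with each new reading and forward the returned diff to the event hub that feeds Time Series Insights.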
If you are doing it as a near-real-time stream, the best solution is to use Azure Stream Analytics with the LAG operator. ASA has a bunch of useful features that you will need, such as PARTITION BY and event ordering policies. Beware, ASA can be expensive to run and hard to work with, but it is a good service for commercial solutions.
If you don't need near-real time, a plain ol' Python script that queries (blob-)persisted data is a good option, and it can be wrapped up in an Azure Function if it doesn't take too long to run.
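A minimal sketch of that batch approach, assuming the telemetry is persisted as JSON lines in a blob container (the container name and field names are placeholders):

```python
import json
from azure.storage.blob import ContainerClient

CONN_STR = "<storage-connection-string>"           # placeholder
container = ContainerClient.from_connection_string(CONN_STR, container_name="telemetry")

last_value = {}                                    # device id -> previous value
for blob in sorted(container.list_blobs(), key=lambda b: b.name):  # assumes time-ordered names
    content = container.download_blob(blob.name).readall().decode()
    for line in content.splitlines():
        msg = json.loads(line)                     # assumed shape: {"deviceId", "ts", "value"}
        prev = last_value.get(msg["deviceId"])
        if prev is not None:
            diff = msg["value"] - prev
            # forward {"deviceId": ..., "ts": ..., "diff": diff} to the event hub / TSI here
        last_value[msg["deviceId"]] = msg["value"]
```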
Azure Functions are not recommended for stateful message processing. You simply have insufficient control over the number of function instances running, the size of a batch, etc., so it is impossible to consistently and confidently know what the 'previous' time series value is. With Azure Functions you have to develop as if concurrency will never be an issue, and you cannot make that assumption with streaming IoT data.
I want to create a CQRS and Event Sourcing architecture that is very cheap and very flexible and very uncomplicated.
I want to make sure that events never fail to at least reach the publisher/event store, ever, ever, because that's where business is.
Now, I have several options in mind:
Azure
With Azure, I am not sure what to use.
Azure Service Bus
Azure Functions
Azure WebJobs (I suppose this can be replaced with Azure Functions)
?? (something else I forgot or don't know?)
How reliable are these Azure serverless solutions?
Custom
For this I am thinking of using RabbitMQ; the problem is the cost of a virtual machine to run it.
All in all, I want:
Ability to replay the messages/events in case of failure.
Ability to easily add subscribers.
Ability to select the subscribers upon which to replay the messages.
The event store should be able to store very large event messages (or how else shall I queue an image or file?).
The event store MUST NEVER EVER get choked, or sleep.
Speed of implementation/prototyping would be an added advantage.
What does your experience suggest?
What about other alternatives (e.g. Apache Kafka)?
Why not run Event Store? Created by Greg Young himself. Host it where you need.
I am a Java user and have been using HornetQ (now Artemis, which I don't use) as an alternative to RabbitMQ for the longest time; the only problem is that it does not support replication, but it gets the job done when it comes to event sourcing. For your custom scenario, RabbitMQ is a good choice, but try running it on a DigitalOcean instance to keep costs low. If you are looking for simplicity and flexibility you have only two choices: build your own, or forgo simplicity and pick up Apache Kafka with all its complexity in exchange for that flexibility. You can also build an event store with MongoDB: https://www.mongodb.com/blog/post/event-sourcing-with-mongodb
Your requirements are too vague to make the optimal choice. You need to consider a lot of things, among them the number of events per aggregate and the number of aggregates (note that these have to be statistical estimates). They matter primarily because if you allow tens of thousands of events per aggregate you will need snapshotting, which adds complexity you might not need.
But for regular use cases you could just use a relational database like Postgres as your (linearizable) event store. It also has LISTEN/NOTIFY functionality, so you would not really need any message bus either, and your application could be written in a reactive way.
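A minimal sketch of that Postgres-as-event-store idea with psycopg2; the table layout, channel name, and optimistic-concurrency scheme are assumptions, not the only way to do it.

```python
import json
import psycopg2

conn = psycopg2.connect("dbname=eventstore")       # placeholder DSN

with conn, conn.cursor() as cur:                   # one stream = one aggregate; versions are unique
    cur.execute("""
        CREATE TABLE IF NOT EXISTS events (
            id        bigserial PRIMARY KEY,
            stream_id text  NOT NULL,
            version   int   NOT NULL,
            type      text  NOT NULL,
            payload   jsonb NOT NULL,
            UNIQUE (stream_id, version)
        )""")

def append_event(stream_id, expected_version, event_type, payload):
    """Append one event; the UNIQUE constraint rejects concurrent writers (optimistic concurrency)."""
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO events (stream_id, version, type, payload) VALUES (%s, %s, %s, %s)",
            (stream_id, expected_version + 1, event_type, json.dumps(payload)),
        )
        cur.execute("NOTIFY new_events")           # wakes up any subscriber doing LISTEN new_events
```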
I want to get job change notifications from Azure Media Services using a Storage queue. I read the statement "It is also possible that some state change notifications get skipped" here.
Can someone tell me how reliable this service is? Is it a very small percentage of all messages, or is it quite noticeable?
We do not provide a guarantee on delivery, but I would say that it is highly reliable.
There are cases where notifications can get dropped (a storage outage so that Media Services cannot deliver notifications to a queue for an extended period of time for example). I would recommend that you have a fallback plan that if a job hasn’t been updated by notification for some expected duration, that you have backup code to GET the job details from the API directly.
The “expected duration” is a little bit difficult to predict since job processing times vary by operation and input content size/complexity. There also can be queuing depending on the number of jobs submitted and the number of reserved units you have.
Hope that helps,
John
Reading the documentation, Azure Event Hubs is meant for:
Application instrumentation
User experience or workflow processing
Internet of Things (IoT) scenarios
Can this be used for any transactional data, such as revenue or other application-sensitive data?
Based on what I read, it looks like it is meant for handling data where one should not be worried about data loss. Is this the case?
It is mainly designed for large-scale ingestion of data. That is why typical scenarios include IoT solutions consisting of a multitude of devices sending mass amounts of telemetry data.
To allow for this kind of scale it does not include some features other messaging services, like Azure Service Bus, do have. I think this blog does a good job of listing the differences. Especially the section Use Case explains things very well:
From a target use case perspective if we consider some of our typical enterprise integration patterns then if you are implementing a pattern which uses a Command Message, or a Request/Reply Message then you probably want to use Azure Service Bus Messaging. RPC patterns can be implemented using Request/Reply messages on Azure Service Bus using a response queue. These are really about ESB and EAI style messaging patterns where you want to send messages between applications and probably want to use other features such as property based routing.
Azure Event Hubs is more likely to be used if you’re implementing patterns with Event Messages and you want somewhere reliable to send them that is capable of dealing with a massive scale but will allow you to do stuff with the events out of process.
With these core target use cases in mind it is easy to see where the scale differences come into play. For messaging it’s about one application telling one or more apps to DO SOMETHING or GIVE ME SOMETHING. The alternative is that in eventing the applications are saying SOMETHING HAS HAPPENED. When you consider this in typical application scenarios and you put events into the telemetry and logging space you can quickly see that the SOMETHING HAS HAPPENED scenario will produce a lot more traffic than the other.
Now I’m not saying that you can’t implement some messaging type functions using event hubs and that you can’t push events to a Service Bus topic as in integration there are always different requirements which result in different implementation scenarios, but I think if you follow the above as a general rule then you will usually be on the right path.
That does not mean, however, that it can only handle data whose loss you would not be worried about. Data is stored for a configurable amount of time and, if necessary, this data can be read again from an earlier point in time.
Now, given your scenario I do not think Event Hubs is the best fit. But truth be told, I am not sure, because you would have to elaborate more on what you want to do exactly.
Addition
The idea behind Event Hubs is that you will get at least once delivery at great scale. (Source). See also this question: Does Azure Event Hub guarantees at least once delivery?
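At-least-once means the consumer has to checkpoint after processing and tolerate occasional duplicates. A hedged sketch with the Python azure-eventhub SDK and a blob checkpoint store (connection strings, hub name, and the process() handler are placeholders):

```python
from azure.eventhub import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblob import BlobCheckpointStore

STORAGE_CONN = "<storage-connection-string>"       # placeholders
EH_CONN = "<event-hubs-connection-string>"

def process(event):
    print(event.sequence_number)                   # stand-in handler; must be idempotent, since a
                                                   # crash before the checkpoint re-delivers events

checkpoint_store = BlobCheckpointStore.from_connection_string(STORAGE_CONN, container_name="checkpoints")
client = EventHubConsumerClient.from_connection_string(
    EH_CONN, consumer_group="$Default", eventhub_name="transactions",
    checkpoint_store=checkpoint_store)

def on_event(partition_context, event):
    process(event)
    partition_context.update_checkpoint(event)     # record progress only after processing succeeds

with client:
    client.receive(on_event=on_event, starting_position="-1")   # "-1" = from the start of each partition
```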
There must be a solution to this already, but I'm having trouble finding it.
We have data stored in table storage and we are syncing it with an offline capable client web app over a restful api (Web API).
We are using a high watermark (currently a datetime) to make sure we only download the data which has changed or been added.
e.g. clients/get?watermark=2013-12-16 10:00
The problem we are facing with this approach is what happens in the edge case where multiple servers are inserting data whilst a get happens. There is a possibility that data could be inserted with a timestamp lower than the client's timestamp.
Should we worry about this or can someone recommend a better way of doing this?
I believe our main issue is inserting the data into the store. At this point there is no way to guarantee the timestamp used, or that the Azure box has the correct time relative to the other Azure boxes.
Are you able to insert data into queues when inserting data into table storage? If so, you can build a sync process that monitors the queue and inserts data based on what's in it. This will allow you to not worry about timestamps and date-sync issues.
It will also make your table storage scanning faster, as you'll be able to go directly to table storage by the Partition/Row keys that would presumably be in the queue messages.
Edited to provide further information:
I re-read your question and realized you're looking to sync with many client applications and not necessarily with a single on-premises sync system, which I assumed originally.
In this case, I'm slightly tweaking my suggestion:
Consider using Service Bus and publishing a message to a Service Bus Topic every time you change/insert an Azure Table Storage (ATS) entity. This message could contain an individual PartitionKey/RowKey or perhaps some other metadata about which ATS entities have been changed.
Your individual disconnectable clients would subscribe to the Service Bus Topic through their own Service Bus Topic Subscription and be able to pull and handle individual Service Bus messages and sync whatever ATS entities are described in those messages.
This way you'll not really care about last-modified timestamps of your entities and only care about handling pulling messages from the service bus topic. If your client pulls all of the messages from a topic and synchronizes all of the entities that those messages describe, it has synchronized itself, regardless of the number of workers that are inserting data into ATS and timestamps with which they insert those entities.
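A rough sketch of both halves with the Python azure-servicebus and azure-data-tables packages; the topic, subscription, and table names are placeholders, and apply_entity stands in for whatever writes into the client's offline store.

```python
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage
from azure.data.tables import TableClient

SB_CONN = "<service-bus-connection-string>"       # placeholders
TABLE_CONN = "<storage-connection-string>"

# Publisher: after writing an ATS entity, announce which entity changed.
def announce_change(pk: str, rk: str) -> None:
    with ServiceBusClient.from_connection_string(SB_CONN) as client:
        with client.get_topic_sender(topic_name="ats-changes") as sender:
            sender.send_messages(ServiceBusMessage(json.dumps({"PartitionKey": pk, "RowKey": rk})))

# Client: drain its own subscription and fetch each changed entity by key.
def sync_client(subscription: str, apply_entity) -> None:
    table = TableClient.from_connection_string(TABLE_CONN, table_name="clients")
    with ServiceBusClient.from_connection_string(SB_CONN) as client:
        with client.get_subscription_receiver(topic_name="ats-changes",
                                              subscription_name=subscription) as receiver:
            for msg in receiver.receive_messages(max_wait_time=5):
                keys = json.loads(str(msg))
                entity = table.get_entity(keys["PartitionKey"], keys["RowKey"])
                apply_entity(entity)              # write it into the client's offline store
                receiver.complete_message(msg)    # only after the local copy is updated
```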
When you're working in a disconnected/distributed environment it is hard to keep things in sync based on actual time (for this to work correctly, the time needs to be in sync between all actors).
Instead you should try looking at logical clocks (like a vector clock). You'll find plenty of Java examples but if you're planning to do this in .NET the examples are pretty limited.
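For illustration only, a tiny vector clock sketch (Python here, though the same few lines translate directly to .NET):

```python
class VectorClock:
    """Minimal vector clock: one counter per node, compared component-wise."""

    def __init__(self, clocks=None):
        self.clocks = dict(clocks or {})

    def tick(self, node_id: str) -> None:
        # A node increments its own counter on every local update.
        self.clocks[node_id] = self.clocks.get(node_id, 0) + 1

    def merge(self, other: "VectorClock") -> None:
        # On receiving another clock, keep the maximum of each component.
        for node, count in other.clocks.items():
            self.clocks[node] = max(self.clocks.get(node, 0), count)

    def happened_before(self, other: "VectorClock") -> bool:
        nodes = set(self.clocks) | set(other.clocks)
        le = all(self.clocks.get(n, 0) <= other.clocks.get(n, 0) for n in nodes)
        lt = any(self.clocks.get(n, 0) < other.clocks.get(n, 0) for n in nodes)
        return le and lt  # neither direction true => the updates are concurrent
```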
On the other hand you might want to take a look at how the Sync Framework handles synchronization.