Service Fabric stateful services with a single DB persistence service - Azure

I'm about to start a project that requires very fast response times and high availability. I have done a few Service Fabric projects before, so I'm feeling pretty confident about that.
I'm currently leaning towards a specific design based on stateful content services as the main data source, with a single data persistence service saving to a database of some sort.
Read operations are served through a Web API.
Write operations arrive via Azure Service Bus, with Rebus as the message handler.
Content services
The content services are stateful services which, on commit, send a message to the persistence service containing the object saved in the reliable dictionary, serialized as JSON.
The content services themselves are responsible for JSON deserialization in the event that they need to restore the data.
Restore scenarios include the entire dictionary being lost for some reason, or a reset message being put on the bus.
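Roughly, the commit path could look like the following sketch. The `ContentItem` and `ContentSaved` types and the Rebus wiring are illustrative assumptions, not a final design:

```csharp
using System.Fabric;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;
using Rebus.Bus;

// Hypothetical entity and message types (illustrative only).
public record ContentItem(string Name, decimal Price);
public record ContentSaved(string Key, string Json);

internal sealed class ContentService : StatefulService
{
    private readonly IBus _bus; // Rebus bus, assumed to be injected

    public ContentService(StatefulServiceContext context, IBus bus)
        : base(context) => _bus = bus;

    // Save to the reliable dictionary first; notify the persistence
    // service only after the local transaction has committed, so it
    // never sees data that was rolled back.
    public async Task SaveAsync(string key, ContentItem item)
    {
        var dictionary = await StateManager
            .GetOrAddAsync<IReliableDictionary<string, string>>("content");

        var json = JsonSerializer.Serialize(item);

        using (var tx = StateManager.CreateTransaction())
        {
            await dictionary.SetAsync(tx, key, json);
            await tx.CommitAsync();
        }

        await _bus.Send(new ContentSaved(key, json));
    }
}
```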
Persistence service
Receives a message from the bus and stores the included entity in a data store (not yet decided; maybe Table Storage).
Serves an entire repository of data when a service needs to reload.
Only concerns itself with storing and retrieving data; no integrity checks.
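On the receiving side, the persistence service could be a plain Rebus handler. This sketch assumes the hypothetical `ContentSaved` message from above and Azure Table Storage via `Azure.Data.Tables`:

```csharp
using System.Threading.Tasks;
using Azure.Data.Tables;
using Rebus.Handlers;

// Rebus handler in the persistence service: stores the raw JSON
// as-is, with no integrity checks, per the design above.
public class ContentSavedHandler : IHandleMessages<ContentSaved>
{
    private readonly TableClient _table;

    public ContentSavedHandler(TableClient table) => _table = table;

    public async Task Handle(ContentSaved message)
    {
        var entity = new TableEntity("content", message.Key)
        {
            ["Json"] = message.Json
        };
        // Upsert so that replays of the same message are harmless.
        await _table.UpsertEntityAsync(entity);
    }
}
```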
I'm really unsure whether this is a feasible way of designing a system that also holds a large amount of user data.
What are your thoughts on this design?

I ended up pursuing this solution. It works and performs very well, but it needs extensive testing to make sure that everything works as expected.

Related

Synchronising in-memory data between Azure Front Door back-end instances

I have a web application in Azure. There are 2 instances, with Azure Front Door routing all traffic to the primary instance only; the secondary is only ever used if the first is unavailable. We have some in-memory data that gets updated by users, and the database is subsequently updated with the changes. The data is kept in memory because of a genuine performance requirement. The issue we have is that the data is fetched from the database only on application start-up, meaning that if the primary instance becomes unavailable, the secondary one could very well have information that is out of date. I was hoping that Front Door could be configured to trigger an API when any sort of switch - from primary to secondary or vice versa - occurs. However, I can't find any reference to this in the documentation.
We have a series of web jobs that run, one of which is triggered every minute. However, using this to keep the data fresh still doesn't guarantee that an instance will have the latest information.
Any thoughts on how to rectify this issue very much appreciated.
[EDIT] Both web apps talk to the same database instance.
Unfortunately, Azure Front Door doesn't have any native support for firing events to something like an Event Hub, but you could stream your logs to one. For example, you could stream the "FrontDoorAccessLog" to an Event Hub and have a script receive these events. When the "OriginName" value changes, you could tell the failover app to update its state via an API.
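As a rough illustration (not a verified schema - the `records`/`properties.originName` JSON shape is an assumption to check against your actual diagnostic logs), a listener using `Azure.Messaging.EventHubs` could look like:

```csharp
using System;
using System.Text.Json;
using Azure.Messaging.EventHubs.Consumer;

// Watch the Front Door access log stream for a change of origin.
var consumer = new EventHubConsumerClient(
    EventHubConsumerClient.DefaultConsumerGroupName,
    "<event-hub-connection-string>",
    "<event-hub-name>");

string? lastOrigin = null;

await foreach (var partitionEvent in consumer.ReadEventsAsync())
{
    using var doc = JsonDocument.Parse(partitionEvent.Data.EventBody.ToString());
    foreach (var record in doc.RootElement.GetProperty("records").EnumerateArray())
    {
        var origin = record.GetProperty("properties")
                           .GetProperty("originName").GetString();

        if (lastOrigin is not null && origin != lastOrigin)
        {
            // Failover detected: call the failover app's refresh API here.
            Console.WriteLine($"Origin changed: {lastOrigin} -> {origin}");
        }
        lastOrigin = origin;
    }
}
```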
Is there a reason why both web apps have their own database if they have to be somewhat synchronized? Why not have both web apps talking to the same DB?

Service Fabric stateful services in-memory vs external storage

I have a Service Fabric application that contains two services, one stateless and one stateful. Stateless service: contains the API endpoints used to communicate with the stateful service. Stateful service: the data is stored in Reliable Collections, i.e. in-memory storage.
I have around 15 Service Fabric microservices that communicate with each other as required. I'm ending up with a lot of proxy calls between the services, which is one of the major causes of the performance problems.
To mitigate this, I'm considering removing the stateful service (in-memory storage with Reliable Dictionaries) and using external storage such as Azure Cosmos DB as the data store.
In the new approach, my application will have one stateless service that communicates with the external data store. Stateless service: contains the API endpoints used to communicate with the storage provider (e.g. Cosmos DB).
Can anyone tell us whether Service Fabric in-memory storage or external storage gives better performance?
Apart from the performance issues, with in-memory storage it is becoming very challenging to implement complex queries, use Elasticsearch, or create reports, as we have dependencies between the services.
Is there a better approach that can resolve these kinds of issues?
The whole point of using stateful services is to bring the data to where the compute (your service) is. The benefit of this is performance, as there is no network latency for getting the data.
Now, what you are doing is effectively throwing this benefit away by using a stateful service as a central datastore for other services to get data from.
There are at least two options I can think of. The first is to use an external data store like Cosmos DB and have all services connect to that data store. The second option is to convert your stateless services to stateful services and copy/distribute to each service only the portions of the data it needs. To make it easier to report on the data, you could create read models.
Currently, we have a database and are moving all database tables into microservices. In order to implement the stored procedures/views, we are fetching data from a few services into a single service and implementing the logic there. Is there an alternative approach for the SPs/views?
You should not try to map a database and its views/stored procedures one-to-one onto logic and microservices. Instead, take a fresh view of it. Let each service put its own data into one or more reliable collections. If there is a need for a data store with data combined from several services, have those services update a so-called read model (you'll probably end up having more than one read model).
Look up terms like CQRS and read models; they will help with a microservices architecture.
Or have all services connect to, for example, a SQL Server, keeping the benefits of stored procedures and views. But do mind that once you use a centralized database, whether it is a SQL database or a Cosmos DB database, your microservices are no longer independent services, as they all share a single database schema.
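To illustrate the read-model idea, here is a minimal sketch using Rebus-style handlers; all message types and the store interface are hypothetical:

```csharp
using System.Threading.Tasks;
using Rebus.Handlers;

// Hypothetical events published by two independent services.
public record OrderPlaced(string OrderId, string CustomerId, decimal Total);
public record CustomerRenamed(string CustomerId, string NewName);

// A read-model updater: consumes events from several services and keeps
// a denormalized "orders with customer name" view for reporting.
public class OrderReportUpdater :
    IHandleMessages<OrderPlaced>,
    IHandleMessages<CustomerRenamed>
{
    private readonly IOrderReportStore _store;

    public OrderReportUpdater(IOrderReportStore store) => _store = store;

    public Task Handle(OrderPlaced msg) =>
        _store.AddRowAsync(msg.OrderId, msg.CustomerId, msg.Total);

    public Task Handle(CustomerRenamed msg) =>
        _store.UpdateCustomerNameAsync(msg.CustomerId, msg.NewName);
}

// Minimal abstraction over the read-model store (hypothetical; the
// backing store could be SQL, Cosmos DB, or another reliable collection).
public interface IOrderReportStore
{
    Task AddRowAsync(string orderId, string customerId, decimal total);
    Task UpdateCustomerNameAsync(string customerId, string newName);
}
```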

Is it a good pattern to use Azure Service Bus as a backup messaging service?

We are considering a design pattern where:
The web service tries to insert data into the database.
If that call fails and the DB is not available,
then we pass that data to Azure Service Bus.
Once the DB is back up, another service reads the data from Service Bus and inserts it into the database.
I personally haven't seen this pattern before; is there any issue with this design?
The way queuing systems are usually used is slightly different from what you're describing.
Queues allow reliable command execution when the destination resource (the database) is not available, and they level the load on the resource rather than overwhelming it.
The steps would be:
Web service sends a Service Bus message with the data that needs to be inserted into the database.
A back-end service receives the messages (peek-lock) and tries to insert the data into the database.
If the operation fails or the database is not available, the message is retried.
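A minimal sketch of these steps with the `Azure.Messaging.ServiceBus` SDK; the queue name, connection string, and the `InsertIntoDatabaseAsync` helper are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

var client = new ServiceBusClient("<connection-string>");

// 1. The web service sends the data as a message instead of writing directly.
var sender = client.CreateSender("inserts");
await sender.SendMessageAsync(
    new ServiceBusMessage(BinaryData.FromObjectAsJson(new { Name = "example" })));

// 2./3. A back-end service processes messages and retries on failure.
var processor = client.CreateProcessor("inserts");
processor.ProcessMessageAsync += async args =>
{
    try
    {
        await InsertIntoDatabaseAsync(args.Message.Body); // placeholder
        await args.CompleteMessageAsync(args.Message);
    }
    catch
    {
        // Abandon: the message becomes visible again and is redelivered.
        // After MaxDeliveryCount attempts it moves to the dead-letter queue.
        await args.AbandonMessageAsync(args.Message);
    }
};
processor.ProcessErrorAsync += args => Task.CompletedTask;
await processor.StartProcessingAsync();

static Task InsertIntoDatabaseAsync(BinaryData body) => Task.CompletedTask;
```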

Service Fabric Application with Entity Framework?

I started learning Service Fabric applications, and I'm a little confused about stateful Reliable Services.
In stateful Reliable Services, does "state" mean the data that would be stored in tables in our normal database applications, or something else?
Is it possible to use EF with stateful Reliable Services?
How can we store/retrieve data to/from a database (like Products, Categories, Employees, etc.) using EF in Reliable Services?
Any tutorial/help will be much appreciable.
Thanks in advance
There are 2 flavors of Reliable Services, stateless and stateful, the main difference being that stateful services give access to reliable collections to store your data.
TL;DR
If you are planning to use Entity Framework (EF) and you have no plan for storing data using reliable collections, stick to stateless services.
Q1
In stateful Reliable Services, does "state" mean the data that would be stored in tables in our normal database applications, or something else?
It means you are planning to store the data in Reliable Collections.
Q2
Is it possible to use EF with stateful Reliable Services?
Yes, even when you use a stateful service you can write logic to store data via EF, and optionally store data in reliable collections as well (see the use case presented by Oleg in the comments, for example). But if you only want to use EF, go for a stateless service; a stateful service only makes sense if you use reliable collections.
Q3
How can we store/retrieve data to/from a database (like Products, Categories, Employees, etc.) using EF in Reliable Services?
Create a stateless service, add the EF NuGet packages, and write the code as you normally would.
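For example, a minimal sketch with EF Core in a stateless service; the `Product` entity, context, and connection string are placeholders, and the `Microsoft.EntityFrameworkCore.SqlServer` package is assumed:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Hypothetical entity and context; nothing Service Fabric specific
// is needed to use EF Core from a stateless service.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public class CatalogContext : DbContext
{
    public DbSet<Product> Products => Set<Product>();

    protected override void OnConfiguring(DbContextOptionsBuilder options) =>
        options.UseSqlServer("<connection-string>");
}

// Example repository used from the stateless service's API endpoints.
public class ProductRepository
{
    public async Task<List<Product>> GetAllAsync()
    {
        using var db = new CatalogContext();
        return await db.Products.AsNoTracking().ToListAsync();
    }
}
```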
Additional information
From this quickstart
A stateless service is a type of service that is currently the norm in cloud applications. It is considered stateless because the service itself does not contain data that needs to be stored reliably or made highly available. If an instance of a stateless service shuts down, all of its internal state is lost. In this type of service, state must be persisted to an external store, such as Azure Tables or a SQL database, for it to be made highly available and reliable.
and
Service Fabric introduces a new kind of service that is stateful. A stateful service can maintain state reliably within the service itself, co-located with the code that's using it. State is made highly available by Service Fabric without the need to persist state to an external store.
A Reliable Collection can best be described as a NoSQL data store. It is up to you whether to use this, or to have a mix of stateful and stateless services.
For a more in-depth overview of Reliable Collections, read this doc

Best practices for storing data with Azure Functions

I've been working a lot with microservices recently, and the common pattern is that every service is responsible for its own data. Thus service "A" cannot access service "B"'s data directly without talking to service "B" via some HTTP API or message queue.
Now I've started to pick up some work with Azure Functions for the first time. I've looked at a fair few examples, and they all seem to have any old function dabbling with data in a shared data store (which seems like going back to the old style of having a massive monolithic database).
I was just wondering if there is a common pattern to follow for data storage when using Functions as a Service, and where the responsibilities lie?
The following describes an event-driven distributed model of business processors in cloud-based solutions without a monolithic database. More details about this concept and technique can be found in my article Using Azure Lease Blob.
Note that each Business Context has its own Lease Blob for holding the state of the processing, with references to other resources such as metadata, config, data, results, etc. This concept allows you to create a multi-dimensional business processing model, where each nested sub-process can have its own Lease Blob.
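As a rough sketch of the lease idea with the `Azure.Storage.Blobs` SDK (container and blob names are placeholders, and the state payload is up to the business context):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;

// Serialize state updates on a "lease blob" so that only one processor
// at a time can mutate the business context's state.
var blob = new BlobClient("<connection-string>", "state", "business-context-1");

// Acquire an exclusive lease (15-60 seconds, or infinite with -1 seconds).
BlobLeaseClient leaseClient = blob.GetBlobLeaseClient();
BlobLease lease = await leaseClient.AcquireAsync(TimeSpan.FromSeconds(30));

try
{
    // Only the lease holder may overwrite the blob.
    await blob.UploadAsync(
        BinaryData.FromString("{\"step\":\"processed\"}"),
        new BlobUploadOptions
        {
            Conditions = new BlobRequestConditions { LeaseId = lease.LeaseId }
        });
}
finally
{
    await leaseClient.ReleaseAsync();
}
```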
