How To Design The Layers in Azure Service Fabric

I have been assigned to design a layered microservices architecture for Azure Service Fabric, but since my experience has mostly been with monolithic architectures, I can't come up with a specific solution.
What I have come up with so far is roughly this:
Data Layer - This is where all the Code First entities reside, along with the DbContext.
Business Layer - This is where the Service Managers perform and enforce the business logic, e.g. UserManager (IUserManager), OrderManager (IOrderManager), InvoiceManager (IInvoiceManager), etc.
WebAPI (Self-Hosted Inside Service Fabric) - Although this WebAPI lives inside Service Fabric, it does nothing except receive requests and call the respective services within Service Fabric. The WebAPI layer also handles authentication and authorization (ASP.NET Identity) before passing the call on to the other services.
Service Fabric Services - UserService, OrderService, InvoiceService. These services are invoked from the WebAPI layer and get the Business Layer (IUserManager, IOrderManager, IInvoiceManager) injected via DI to perform their operations. (A rough sketch follows below.)
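To illustrate, a Service Fabric service in this design would look roughly like this (all names are placeholders of mine, and the remoting listener wiring is omitted):

```csharp
using System;
using System.Fabric;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Runtime;

// Shared Business Layer contracts (these would live in a common class library).
public record OrderDto(Guid Id, decimal Total);
public interface IOrderManager
{
    Task<Guid> CreateOrderAsync(OrderDto order);
}

// OrderService only receives calls from the WebAPI layer and delegates
// to the injected business logic; listener setup is omitted for brevity.
internal sealed class OrderService : StatelessService
{
    private readonly IOrderManager _orderManager;

    public OrderService(StatelessServiceContext context, IOrderManager orderManager)
        : base(context)
    {
        _orderManager = orderManager;
    }

    public Task<Guid> CreateOrderAsync(OrderDto order) =>
        _orderManager.CreateOrderAsync(order);
}
```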
Do you think this is okay to proceed with?
One theoretical issue, though: while reading several resources on microservices architecture, I found that all of them suggest keeping the business logic inside the service itself, so that each service can be scaled independently. So I believe I'm violating a basic principle of microservices.
I'm doing this because the customer's requirement is to reuse this Business Layer across several projects, such as batch jobs (Azure WebJobs) and a backend dashboard for internal employees (ASP.NET MVC). If I don't keep the Business Layer shared, I have to write the same business logic again for the WebJobs and the backend dashboard, which I feel is not a good idea, as a simple change in business logic would then require code changes in several places.
One more concern: in that case, I have to use service-to-service communication for ACID transactions. For example, while creating an Order, both the Order and its Invoice must be created. So I thought of using event-driven programming, i.e. the Order Service emits an event which the Invoice Service subscribes to, creating the Invoice when an Order is created. But the complication is that if the Invoice Service fails to create the invoice, it can either keep retrying indefinitely (which I think is a bad idea) or emit another event, which the Order Service subscribes to, to roll back the Order. There can be a lot of confusion with this.
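Roughly the flow I have in mind, sketched in-process with made-up names (in reality the events would travel over a broker):

```csharp
using System;
using System.Threading.Tasks;

public record OrderCreated(Guid OrderId);
public record OrderRollbackRequested(Guid OrderId, string Reason);

public class InvoiceHandler
{
    // Retry a bounded number of times with backoff; if every attempt fails,
    // emit a compensating event instead of retrying forever.
    public async Task HandleAsync(OrderCreated evt, Func<object, Task> publish)
    {
        const int maxAttempts = 3;
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                await CreateInvoiceAsync(evt.OrderId);
                return;
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                // Transient failure: back off and try again.
                await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
            catch (Exception ex)
            {
                // Out of retries: ask the Order Service to compensate.
                await publish(new OrderRollbackRequested(evt.OrderId, ex.Message));
                return;
            }
        }
    }

    private Task CreateInvoiceAsync(Guid orderId) => Task.CompletedTask; // stand-in
}
```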
Also, I should mention that we are using a single database as of now.
So my questions are...
What issues do you see with my approach? Is it okay?
If not, please suggest a better approach. You can also point me to resources covering implementation details or conceptual details.
NOTE: The client's requirement is that they can scale specific modules as needed. For example, UserService might not be used much, since there won't be many signups or profile changes daily, but OrderService may need to scale out as lots of orders come in every day.
I'll be glad to learn, as this is my first chance to get my hands on designing a microservices architecture.

First of all, why does the customer want to use Service Fabric and a microservices architecture when, at the same time, it sounds like there are other parts of the solution (WebJobs etc.) that will not be part of that architecture but rather live in their own ecosystem (yet share logic)? I think it would be good for you to first understand the underlying requirements that should guide the architecture. What is most important?
Scalability? Flexibility?
Development and deployment? Maintainability?
Modularity and the ability to compose new solutions from autonomous microservices?
The list could go on. Until you figure this out, there is really no point in designing further, as you don't know what you are designing for.
As for sharing business logic with WebJobs, there is nothing preventing you from sharing code packages containing the same BL; it doesn't have to be a shared service, and it doesn't have to be packaged the same way in relation to its interface or persistence. Another thing to consider: why do you want to run WebJobs at all when you can build similar functionality in SF services?
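For instance, a WebJob could consume the very same business-layer package; in this sketch the names are invented, and IOrderManager/OrderDto are assumed to come from that shared library:

```csharp
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

// The WebJob references the same business-layer package as the SF services,
// so a change to the business logic happens in exactly one place.
public class Functions
{
    private readonly IOrderManager _orders;

    public Functions(IOrderManager orders) => _orders = orders;

    public async Task ProcessOrderMessage([QueueTrigger("incoming-orders")] string message)
    {
        var order = JsonSerializer.Deserialize<OrderDto>(message)!;
        await _orders.CreateOrderAsync(order);
    }
}
```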

Related

How best to integrate a search service for multiple microservices with an API gateway

We have a bunch of microservices with an API gateway in front. We have created a search service that searches across the services and aggregates the data for the client app. We have two approaches to choose from:
Let the API gateway contact search service directly, giving it the context about which microservice to search in.
Let the API gateway contact each microservice directly (which we are already doing), and let each microservice handle contacting (or not) the search service by itself.
I liked the second approach because it cleanly hides the search abstraction away from the API gateway, but since each microservice is responsible for contacting the search service, there might be duplicated search-handling logic across the services, which we want to avoid since the number of services we have is huge.
What would you suggest would be the better pattern to go ahead with? Is there an even better architecture than these that I could leverage?
I don't know exactly what the responsibility of the search service is, but if, as I imagine, it is a service that aggregates information from other services to show to the user, I would handle this problem using one of these two approaches:
The search service periodically calls each of the microservices, asking for the information it needs and processing it accordingly.
If the search service doesn't know when this information will be generated and you want to update it in near real time without the cost of intensive polling, I would use a message broker: the different microservices produce events with the information, and the search service consumes them and updates its data accordingly (see the consumer sketch below).
Either way, I wouldn't use the API gateway as an orchestrator to call the different microservices, because the API gateway shouldn't know anything about the business logic; its responsibility is only to be a single point of entry for outside calls, plus perhaps other cross-cutting tasks such as auditing or authorization.
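A sketch of what the consumer side of the broker approach might look like with Azure Service Bus; the topic and subscription names here are made up:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// The search service subscribes to change events published by the other
// microservices and updates its own index as they arrive.
string connectionString = "<service-bus-connection-string>";
await using var client = new ServiceBusClient(connectionString);
ServiceBusProcessor processor = client.CreateProcessor("entity-changes", "search-service");

processor.ProcessMessageAsync += async args =>
{
    // Stand-in for updating the search index from the event payload.
    Console.WriteLine($"Indexing change event: {args.Message.Body}");
    await args.CompleteMessageAsync(args.Message);
};
processor.ProcessErrorAsync += args =>
{
    Console.WriteLine(args.Exception); // log properly in real code
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();
await Task.Delay(-1); // keep the consumer alive
```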

Application per service in Service Fabric

I'm designing my Service Fabric cluster. I'm torn between creating one app that hosts all the services inside it vs creating one app per service.
I didn't find clear guidelines on this. The main advantage I see for one app per service is that we can deploy each service independently, since it has its own app. We can also host the code in different repos. Are there downsides to this?
A better approach is to have one Application per set of services, where the services provide a cohesive function. An Application should be an umbrella for n services which are related in their function; for instance, they may be within the same bounded context or be related to a common operational unit. However, this doesn't mean they have to be deployed/updated in unison.
Services can be deployed independently within an ApplicationType if you move away from using the DefaultServices construct. You can read about why Default Services should be avoided in production here - essentially they create a rigid deployment strategy, and you lose some of the power of Service Fabric parameterization available via PowerShell.
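Once the default services are gone, each service is created explicitly against the deployed ApplicationType - via New-ServiceFabricService in PowerShell, or through FabricClient as in this rough sketch (application and service names invented):

```csharp
using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

// Create a service instance explicitly instead of relying on DefaultServices;
// each service can now be created, upgraded and removed on its own schedule.
var client = new FabricClient();
var description = new StatelessServiceDescription
{
    ApplicationName = new Uri("fabric:/MyApp"),
    ServiceName = new Uri("fabric:/MyApp/OrderService"),
    ServiceTypeName = "OrderServiceType",
    InstanceCount = -1, // -1 = one instance per node
    PartitionSchemeDescription = new SingletonPartitionSchemeDescription()
};
await client.ServiceManager.CreateServiceAsync(description);
```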
The concept of an Application may seem at odds with a microservice architecture, but remember it's just a logical grouping; individual services within an Application are still independently deployable.
Lots of useful info in the Application Model docs.
The main advantage I see for one app per service is that we can deploy each service independently, since it has its own app.
You can also deploy/upgrade individual services within the same application without affecting the other deployed services. Please check out differential packaging here and here.
We can also host the code in different repos
Generally, when we split our code into separate repositories, it is because we have a domain boundary that we don't want to track with other services - for example, services owned by different teams or deployed on different schedules. In those cases it would make sense to have them as separate applications.
Are there downsides to this?
Technically, no. But there are some points you have to keep in mind.
When we talk about microservices, we see them as independent services running on their own with as few dependencies as possible on other services. When we talk about applications, we seem to go against this 'law', because we have to deploy the services together. But we shouldn't see applications that way: an application is just a logical isolation for these services. So where is the benefit of SF applications?
When you have multiple services deployed (dependent on each other or not), you need a way to keep track of them as a bigger unit, otherwise you might end up with:
a cluster full of services that are no longer required and are just there because 'we might be using them' or because someone forgot to remove them when their peers became obsolete;
dependent services missing from new deployments;
versions of services that are not compatible with each other (contracts, APIs, and so on).
SF Applications work like a snapshot of these services. For example, whenever a service gets updated, you also upgrade the application to reference the new definition of your services and their dependencies. This tells SF "this is how I want my services running", and SF will get them running exactly as you described. That doesn't mean you have to update all of them whenever an upgrade is required: you can update just the ones that changed and then deploy a version of your application where SF manages the version of each service for you. As an analogy, it is like a docker-compose file where you specify the containers to deploy as a single deployment.
Given that, when you opt out of the application concept, you lose these benefits, because now you have to manage every single service on its own and keep track of the versions it depends on. And in cases where two services in different applications need to be deployed together (because of breaking changes, for example), you would not be able to roll back easily if one of them fails, because they are no longer dependent on each other; you would have to write your own logic to handle this.
A typical scenario you might find yourself in: a new version of a service gets deployed, and other services that were not updated in the same release stop working, while from your deployment's point of view the new service looks OK, without any error.
So, in the end, it is just a trade-off: you opt for more flexibility in deploying your services, but end up with more maintenance.

Does SOA require a new instance of a webserver for each microservice?

Let's say I want a simple set of web services off of one domain:
User authentication
Projects datastore
Does this mean I would create 2 different databases, with 2 different instances of express/flask/etc, with 2 different servers running on 2 different ports?
In short, no, it does not require that; however, you can set it up that way if that is what you need.
Remember that microservices allow you to create services in a polyglot fashion. For example, you could write the user authentication service in C++ and the projects service in Java. However, most developers feel that hosting every microservice on a different technology is overkill.
Microservices will typically share a persistent storage of some sort, i.e. a common SQL/NoSQL database back end. They are typically hosted on the same server as well, though in different process spaces, potentially allowing you to make the services come and go without affecting the whole.
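To make that concrete (the question mentions express/flask, but the idea is the same in any stack), here is a minimal .NET sketch of two logical services sharing one process and one machine, just on different ports; ports and routes are arbitrary:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;

// Two logical services in one process; either could later be split out
// to its own host without callers changing anything but the base URL.
var auth = WebApplication.Create();
auth.MapGet("/auth/login", () => "user authentication service");

var projects = WebApplication.Create();
projects.MapGet("/projects", () => "projects datastore service");

await Task.WhenAll(
    auth.RunAsync("http://localhost:5001"),
    projects.RunAsync("http://localhost:5002"));
```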
The "micro" part really refers to the business context and has nothing to do with the technical side of things. So putting every service on a separate database and server does not by itself make it a "microservice".
A service that does both employee registration and customer registration is probably not a microservice, if one considers that customers and employees are two entities with life cycles of their own. An employee might be assigned to a customer, but they should not share a service context.
Remember, there are no right or wrong decisions in this - just successful and unsuccessful SOA implementations.

Azure Apps - Distributed Architecture - 1 API Layer vs 2 API Layers - Design decisions

Background
Having run through the Getting Started with API Apps and ASP.NET in Azure App Service tutorial (https://azure.microsoft.com/en-gb/documentation/articles/app-service-api-dotnet-get-started/), we had an architecture question arise today around the design decisions made to split the To Do List application's API layers into a middle-tier API app and a data-tier API app.
When approaching the build of an application using a distributed architecture, what considerations should drive when this type of separation should occur in your API layers?
Another way of asking this question: what are the pros and cons of having a separate middle-tier API app and a data-tier API app when building your application?
Other Questions
I had a read of "Web apps architecture: 1 or n API" (see link that follows), which, while insightful, asks a slightly different question from ours. We are talking about a single domain which has separate API layers for the middle tier (logic) and the data tier.
Web apps architecture: 1 or n API
This, of course, depends. Deciding whether to build out what I call "infrastructure services" is very strongly dependent on your needs and your application(s).
Infrastructure-tier services generally get much more reuse than business-logic-tier services. They are very easy to recompose into new applications. The most common instance of this is building an admin interface as a separate application.
If you have already built several applications in your organization and have found reuse occurring regularly, then I would seriously contemplate infrastructure services. If your organization is writing its first application and you don't see this fanning out to additional interfaces, then maybe just isolate your data access behind a DAO pattern; it is fairly straightforward to refactor it out into a stand-alone service later. (A minimal sketch follows.)
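A minimal version of that DAO seam, with hypothetical names: callers depend only on the interface, so the in-process implementation can later be swapped for one that calls a stand-alone data service.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public record ToDoItem(int Id, string Title);

public interface IToDoItemDao
{
    Task<ToDoItem?> GetAsync(int id);
    Task SaveAsync(ToDoItem item);
}

// Today: a local implementation (in-memory here for brevity; EF in practice).
// Later: an implementation that calls the data-tier API app, with no change
// to the business logic that consumes IToDoItemDao.
public class InMemoryToDoItemDao : IToDoItemDao
{
    private readonly Dictionary<int, ToDoItem> _items = new();

    public Task<ToDoItem?> GetAsync(int id) =>
        Task.FromResult(_items.TryGetValue(id, out var item) ? item : null);

    public Task SaveAsync(ToDoItem item)
    {
        _items[item.Id] = item;
        return Task.CompletedTask;
    }
}
```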
I think the example design is somewhat confusing. I have not yet seen such a design in the real world, because it appears to turn every function call into an HTTP/RPC call.
In my experience, the SPA uses a public API (or gateway API) which then calls your internal APIs / microservices to aggregate results. It is your microservices that may have DAOs and, most importantly, the business logic. (A sketch follows.)
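For example, a gateway endpoint might fan out and aggregate like this; the internal hostnames and routes here are invented:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient();
var app = builder.Build();

// The gateway only aggregates results from internal microservices;
// the business logic itself stays behind those internal APIs.
app.MapGet("/dashboard", async (IHttpClientFactory factory) =>
{
    var http = factory.CreateClient();
    var ordersTask = http.GetStringAsync("http://orders-api/orders/recent");
    var userTask = http.GetStringAsync("http://users-api/users/me");
    await Task.WhenAll(ordersTask, userTask);
    return Results.Ok(new { orders = ordersTask.Result, user = userTask.Result });
});

app.Run();
```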

Moving from conventional architecture to Azure

I am designing the architecture for an Azure application, and I have a few questions on how to proceed. I am familiar with the basics of Azure, but have never built and deployed an Azure application before. I have extensive experience with conventional non-cloud, web-hosted applications, though.
My application will be the usual database-centric business system with a web user interface. We want to start very small and grow slowly as we gain a user base. I am planning to use an SQL Azure database for relational storage, as well as blob storage for documents and the like. These will be accessed by a Data Access Layer, which in turn will be operated by a Business Layer. The web user interface will be built using ASP.NET and will rest on the Business Layer.
All this is very traditional, but I wonder how well it fits with Azure. I have some specific and inter-related questions:
I see the Data Layer and Business Layer as part of an Application Tier that can be deployed on a worker role, whereas the web user interface can be deployed as a Front-End Tier on a web role. Is separating the business and presentation logic like this a wise decision on Azure?
Having said the above, two separate roles wouldn't make sense while the user base is very small, so I would rather deploy everything together in a single web role until we get bigger. What do I need to do to make sure that these two tiers can be easily reconfigured to run as either one or two roles without any recoding?
The communication between the web user interface and the Business Layer must be fast; I am concerned that it won't be, especially when the two are deployed as separate tiers on different web/worker roles. What is the best communication mechanism in Azure for this? I have considered queue storage, Service Bus, and virtual network, but I am not sure how to make a decision here.
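For instance, is queued messaging along these lines the right direction, or is it only suitable for fire-and-forget work rather than fast request/response calls? (The sketch uses the current Service Bus client and an invented queue name.)

```csharp
using Azure.Messaging.ServiceBus;

// Front-end tier enqueues work; the business tier dequeues and processes it.
string connectionString = "<service-bus-connection-string>";
await using var client = new ServiceBusClient(connectionString);

// Web role side: send a request to the business tier.
ServiceBusSender sender = client.CreateSender("business-requests");
await sender.SendMessageAsync(new ServiceBusMessage("{\"op\":\"createOrder\"}"));

// Worker role side: receive and complete the message.
ServiceBusReceiver receiver = client.CreateReceiver("business-requests");
ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
await receiver.CompleteMessageAsync(message);
```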
I have been reading some best-practices posts and documents online, but they seem to address advanced issues. I would rather have answers to these quite basic concerns, ideally in the form of pointers to best-practices articles or the like. Thank you.
