Orchestrating different bounded contexts: whose responsibility is it? - domain-driven-design

I have 3 separate services using different databases with REST interfaces:
First service: Information about Customers
Second service: Information about Customer Trades
Third service: Information about Customer Documentation
Problem:
Every customer has a Status that should be evaluated based on their trades and documents.
Which service should be responsible for this evaluation, and how should I implement the orchestration between the other services?

If you can, I'd create a 4th service. That way you have a service that returns exactly what you need, avoiding the problem (and over-chattiness) of calling 2 services and merging the result sets. Otherwise, if you aren't able to create a 4th service, maybe write a proxy service that, through one call, calls the other 2 services and caches data where possible, to help cut down on repeated calls for commonly queried customers.
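For illustration, a minimal sketch of what that 4th (aggregator) service could look like, assuming Node 18+ (for the built-in fetch), Express, and hypothetical endpoints on the trades and documentation services:

```typescript
import express from "express";

const app = express();

// Hypothetical base URLs for the other two services.
const TRADES_URL = "http://trades-service/api";
const DOCS_URL = "http://docs-service/api";

// Naive in-memory cache to cut down on repeated calls for common customers.
const statusCache = new Map<string, { status: string; expires: number }>();
const TTL_MS = 60_000;

app.get("/customers/:id/status", async (req, res) => {
  const { id } = req.params;
  const cached = statusCache.get(id);
  if (cached && cached.expires > Date.now()) {
    return res.json({ customerId: id, status: cached.status, cached: true });
  }

  // Fan out to both services in parallel, then merge the results.
  const [trades, docs] = await Promise.all([
    fetch(`${TRADES_URL}/customers/${id}/trades`).then((r) => r.json()),
    fetch(`${DOCS_URL}/customers/${id}/documents`).then((r) => r.json()),
  ]);

  // Placeholder rule; the real evaluation policy lives in this service.
  const status =
    trades.length > 0 && docs.every((d: any) => d.approved)
      ? "ACTIVE"
      : "PENDING";

  statusCache.set(id, { status, expires: Date.now() + TTL_MS });
  res.json({ customerId: id, status });
});

app.listen(3000);
```

The important design point is that the status evaluation rule now has exactly one owner; the other services remain ignorant of it.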

Related

How best to integrate a search service for multiple microservices with an API gateway

We have a bunch of microservices with an API gateway in front. We have created a search service that would search across the services and aggregate the data for the client app. We have two approaches to go about it:
Let the API gateway contact the search service directly, giving it context about which microservice to search in.
Let the API gateway contact each microservice directly (which we are already doing currently), and let each of the microservice handle contacting (or not) the search service by themselves.
I liked the 2nd approach because it would cleanly hide the search abstraction away from the API gateway, but since each microservice is responsible for contacting the search service, there might be duplicated search-handling logic across the services, which we would want to avoid since the number of services we have is huge.
What would you suggest would be the better pattern to go ahead with? Is there an even better architecture than these that I could leverage?
I don't know exactly what the responsibility of the search service is, but if, as I imagine, it is a service that aggregates information from other services to show to the user, I would handle this problem using one of these two approaches:
The search service periodically calls each of the microservices, asking for the information it needs and processing it accordingly.
If the search service doesn't know when this information will be generated, and you want to update it in near real time without the cost of intensive polling, use a message broker: the different microservices produce events with the information, and the search service consumes them and updates its own data accordingly.
Either way, I wouldn't use the API gateway as an orchestrator to call the different microservices, because the API gateway shouldn't know anything about the business logic; its responsibility is only to be a single point of entry for outside calls, plus perhaps other cross-cutting tasks such as auditing or authorization.
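As a sketch of the second (broker-based) approach, using RabbitMQ via amqplib; the exchange, queue, and routing-key names are illustrative:

```typescript
import amqp from "amqplib";

async function run() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();

  // Microservices publish domain events to this exchange.
  await ch.assertExchange("domain-events", "topic", { durable: true });
  const { queue } = await ch.assertQueue("search-service-index", {
    durable: true,
  });

  // Subscribe only to the events the search index cares about.
  await ch.bindQueue(queue, "domain-events", "customer.*");
  await ch.bindQueue(queue, "domain-events", "order.*");

  await ch.consume(queue, async (msg) => {
    if (!msg) return;
    const event = JSON.parse(msg.content.toString());
    await updateSearchIndex(event); // upsert into the search service's own store
    ch.ack(msg); // acknowledge only after the index is updated
  });
}

// Placeholder: write to Elasticsearch, a database table, etc.
async function updateSearchIndex(event: unknown): Promise<void> {}

run().catch(console.error);
```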

Does SOA require a new instance of a webserver for each microservice?

Let's say I want a simple set of web services off of one domain:
User authentication
Projects datastore
Does this mean I would create 2 different databases, with 2 different instances of express/flask/etc, with 2 different servers running on 2 different ports?
In short, no, it does not require it; however, you can do it that way if that is what you need.
Remember that microservices allow you to create services in a polyglot fashion. For example, you could host the user authentication in C++ and the projects in Java. However, most developers feel that hosting every microservice on a different technology is overkill.
Microservices will typically share persistent storage of some sort, i.e. a common SQL/NoSQL database back end. They are typically hosted on the same server as well, though in different process spaces, potentially allowing services to come and go without affecting the whole.
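To make that concrete, here's a sketch (Node/Express, since the question mentions it): the "micro" boundary can be purely logical, with both services served by one process on one port:

```typescript
import express from "express";

// Two logically separate services behind one server process. Either router
// could later be extracted into its own deployment without changing its API.
const auth = express.Router();
auth.post("/login", (req, res) => res.json({ token: "stub" }));

const projects = express.Router();
projects.get("/", (req, res) => res.json([]));

const app = express();
app.use("/auth", auth);
app.use("/projects", projects);

app.listen(8080); // one server, one port, two services
```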
The micro part really refers to the business context and has nothing to do with the technical side of things. So having every service on a separate database and server does not make it a "microservice".
A service that does both employee registration and customer registration is probably not a microservice, if one considers that customers and employees are two entities with life cycles of their own. An employee might be assigned to a customer, but they should not share a service context.
Remember, there are no right or wrong decisions in this. Just successful and unsuccessful SOA implementations.

How To Design The Layers in Azure Service Fabric

I have been assigned to think up a layered microservices architecture for Azure Service Fabric. But since my experience has mostly been with monolithic architectures, I can't come up with a specific solution.
What I have thought of so far is...
Data Layer - This is where all the Code First entities reside, along with the DbContext.
Business Layer - This is where all the Service Managers would perform and enforce the business logic, i.e. UserManager (IUserManager), OrderManager (IOrderManager), InvoiceManager (IInvoiceManager), etc.
WebAPI (Self-Hosted Inside Service Fabric) - Although this WebAPI is inside Service Fabric, it does nothing except receive requests and call the respective services under Service Fabric. The WebAPI layer would also do any authentication and authorization (ASP.NET Identity) before passing the call on to the other services.
Service Fabric Services - UserService, OrderService, InvoiceService. These services are invoked from the WebAPI layer and have the Business Layer (IUserManager, IOrderManager, IInvoiceManager) injected via DI to perform their operations (rough sketch below).
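Roughly the shape I have in mind (the real code is C# on Service Fabric; this TypeScript sketch only shows the dependency flow, and the names are from my list above):

```typescript
// Business Layer contract and implementation.
interface IUserManager {
  register(email: string): Promise<string>;
}

class UserManager implements IUserManager {
  async register(email: string): Promise<string> {
    // business rules + persistence via the Data Layer would live here
    return "user-1";
  }
}

// Service Fabric service: thin, with the Business Layer injected (DI).
class UserService {
  constructor(private users: IUserManager) {}
  handleRegister(email: string) {
    return this.users.register(email);
  }
}

// WebAPI layer: authn/authz only, then delegate to the service.
const userService = new UserService(new UserManager());
```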
Do you think this is okay to proceed with?
One theoretical issue though: while reading several microservices architecture resources, I found that all of them suggest keeping the business logic inside the service, so that the specific service can be scaled independently. So I believe I'm violating a basic aspect of microservices.
I'm doing this because the customer requirement is to use this Business Layer across several projects, such as batch jobs (Azure Web Jobs) and a backend dashboard for internal employees (ASP.NET MVC). If I don't keep the Business Layer the same, I have to write the same business logic again for the Web Jobs and the backend dashboard, which I feel is not a good idea, as a simple change in business logic would then require code changes in several places.
One more concern: in that case, I have to go with service-to-service communication for ACID transactions. For example, while creating an Order, both the Order and the Invoice must be created. So I thought of using event-driven programming, i.e. the Order Service emits an event which the Invoice Service subscribes to, creating the Invoice on creation of the Order. But the complication is that if the Invoice Service fails to create the invoice, it can either keep retrying indefinitely (which I think is a bad idea), or emit another event that the Order Service subscribes to in order to roll back the order. There can be lots of confusion with this.
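Roughly what I mean, as a sketch (an illustrative in-process bus stands in for a real broker; the bounded retry and event names are my own assumptions):

```typescript
type Handler = (payload: any) => Promise<void>;
const handlers = new Map<string, Handler[]>();

function subscribe(event: string, handler: Handler) {
  handlers.set(event, [...(handlers.get(event) ?? []), handler]);
}
async function publish(event: string, payload: any) {
  for (const h of handlers.get(event) ?? []) await h(payload);
}

// Invoice Service: retry a bounded number of times, then compensate.
subscribe("OrderCreated", async (order) => {
  const MAX_ATTEMPTS = 3;
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      await createInvoice(order); // may fail
      await publish("InvoiceCreated", { orderId: order.id });
      return;
    } catch {
      if (attempt === MAX_ATTEMPTS) {
        await publish("InvoiceFailed", { orderId: order.id });
      }
    }
  }
});

// Order Service: compensating action instead of a distributed transaction.
subscribe("InvoiceFailed", async ({ orderId }) => {
  await cancelOrder(orderId); // mark the order cancelled/failed
});

async function createInvoice(order: any): Promise<void> {
  /* ... */
}
async function cancelOrder(orderId: string): Promise<void> {
  /* ... */
}
```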
Also, I should mention that we are using a single database as of now.
So my questions are...
What issues do you see with my approach? Is it okay?
If not, please suggest a better approach. You can also point me to resources for implementation or conceptual details.
NOTE: The client's requirement is to be able to scale specific modules as needed. For example, UserService might not be used much, as there won't be many signups or user-profile changes daily, but OrderService may need to scale out, as there can be lots of orders coming in daily.
I'll be glad to learn, as this is my first chance to get my hands on designing a microservices architecture.
First of all, why does the customer want to use Service Fabric and a microservices architecture when, at the same time, it sounds like there are other parts of the solution (webjobs etc.) that will not be part of that architecture but rather live in their own ecosystem (yet share logic)? I think it would be good for you to first understand the underlying requirements that should guide the architecture. What is most important?
Scalability? Flexibility?
Development and deployment? Maintainability?
Modularity, in the ability to compose new solutions based on autonomous microservices?
The list could go on. Until you figure this out, there is really no point in designing further, as you don't know what you are designing for...
As for sharing business logic with webjobs, there is nothing preventing you from sharing code packages containing the same BL; it doesn't have to be a shared service, and it doesn't mean it has to be packaged the same way in relation to its interface or persistence. Another thing to consider: why do you want to run webjobs when you can build similar functionality in SF services?
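As a sketch, sharing the BL as a package rather than a service could be as simple as this (the package name is illustrative):

```typescript
// @myorg/billing-logic (illustrative name): pure business rules, with no
// hosting or persistence concerns baked in.
export interface Order {
  id: string;
  lines: { price: number; qty: number }[];
}

export function computeInvoiceTotal(order: Order): number {
  return order.lines.reduce((sum, l) => sum + l.price * l.qty, 0);
}

// Both the Service Fabric service and the web job depend on the same package:
//   import { computeInvoiceTotal } from "@myorg/billing-logic";
// A rule change is made once and picked up by every consumer on its next
// release, without forcing a shared runtime service.
```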

microservices and bounded contexts

For the sake of the question, let's say I have 2 microservices:
Identity management
Accounting
I know that the microservices should not be tightly coupled, and each should have its own database.
Let's say that Accounting has invoices, and each invoice has an issuing agent.
An agent in Accounting also exists as a User in the Identity microservice.
If I understood correctly, data from identity management (users) should be copied to accounting (agents), copying only the data needed for that bounded context (first and last name), so the invoice can have a proper issuingAgentId.
Is this the correct way to keep data consistent and shared between contexts?
Each time a user is created in the identity microservice, a "UserCreated" event is published, and accounting or any other service interested in this event should listen and process it by adding the corresponding agent?
The same goes for updating user information.
Yes, this is one way to handle it, and it is usually the preferred method. You keep a local cache in your service that holds copies of the data from another service. In an event-driven system, this involves listening to events of interest and using them to update your local cache. The cache could be in-memory or persisted. An example for your use case: when raising an invoice, the Accounting context would look in its local cache for a user/agent id before creating the Invoice.
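A sketch of that flow (the event shape and in-memory store are illustrative; a persisted table works the same way):

```typescript
// Event published by the Identity context.
interface UserCreated {
  userId: string;
  firstName: string;
  lastName: string;
}

// Accounting's local cache of agents: only the fields this context needs.
const agents = new Map<string, { firstName: string; lastName: string }>();

function onUserCreated(event: UserCreated) {
  agents.set(event.userId, {
    firstName: event.firstName,
    lastName: event.lastName,
  });
}

// Raising an invoice reads from the local cache, never from Identity.
function raiseInvoice(issuingAgentId: string) {
  const agent = agents.get(issuingAgentId);
  if (!agent) throw new Error("Unknown agent: cache not yet populated");
  // ...create the invoice with issuingAgentId and the cached name...
}
```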
Other options:
Shared database
I know it is frowned upon (for good reason), but you can always share a database schema. For example, the Identity context can write to a user table, and the Accounting context can read from it when it needs an AgentId to put in an invoice. The trade-off is that you are coupling at the database level and introducing a single point of failure.
RPC
You can make an RPC call to another service when you need information. In your example, the Accounting context would call the Identity Management context for the AgentId/User information before raising an invoice. The trade-off with this approach is, again, coupling to the other service. What do you do when it is not available? You cannot raise an Invoice.
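A sketch that makes the coupling explicit (the Identity endpoint is hypothetical):

```typescript
async function raiseInvoice(issuingAgentId: string) {
  let agent: { firstName: string; lastName: string };
  try {
    const res = await fetch(`http://identity-service/users/${issuingAgentId}`);
    if (!res.ok) throw new Error(`Identity returned ${res.status}`);
    agent = await res.json();
  } catch (err) {
    // The trade-off in code: Identity is down, so no invoice can be raised.
    throw new Error(`Cannot raise invoice, Identity unavailable: ${err}`);
  }
  // ...create the invoice using agent.firstName / agent.lastName...
}
```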
Reporting domain
Another option is to have a completely separate service that listens for data from other services and maintains view models for UIs. This keeps your other services ignorant of each other's concerns. In an event-driven system, you'd listen for events from other services that allow you to build a view model for the UI. This is usually a good option if all you are doing is viewing the data.

Windows Azure Cache with a multi tenant application

I have a multi-tenant application in development that I am deploying to Azure.
I would like to take advantage of the Windows Azure Cache service, as it looks like it will be a great performance improvement vs. hitting the database for each call.
Let's say I have 2 tables: Businesses and Customers. A business can have multiple customers, and the Businesses table contains details about the business.
Business details don't change often, but customer information changes constantly for each of the different tenants.
I assume I need 2 named caches (1 for business details and 1 for customers).
Are 2 named caches enough, or do I need to separate these for each of the tenants? I think 2 would be OK, since if I have to create a separate one for each tenant it will get expensive pretty quickly.
Thank you.
Using different named caches is interesting if you have different cache requirements (expiry policy, default TTL, notifications, high availability, ...).
In your case, you could simply look at using different regions per tenant:
Windows Azure Cache supports the creation and use of user-defined regions. A region is a subgroup for cached items. Regions also support the annotation of cached items with additional descriptive strings called tags. Regions support the ability to perform search operations on any tagged items in that region.
This would allow you to split your named cache (you would only need one) into regions per tenant, holding the businesses and customers for that tenant. And since the businesses don't change that often, you can simply set the TTL for those items to 1, 2, ... hours.
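The Azure Cache API itself is .NET, but the keying scheme is easy to illustrate. This is only a conceptual sketch (not the actual Azure Cache API) of one named cache partitioned by tenant-scoped regions with different TTLs:

```typescript
const cache = new Map<string, { value: unknown; expires: number }>();

function put(
  tenantId: string,
  region: "business" | "customer",
  key: string,
  value: unknown,
  ttlMs: number
) {
  cache.set(`${tenantId}:${region}:${key}`, {
    value,
    expires: Date.now() + ttlMs,
  });
}

function get(tenantId: string, region: "business" | "customer", key: string) {
  const entry = cache.get(`${tenantId}:${region}:${key}`);
  return entry && entry.expires > Date.now() ? entry.value : undefined;
}

// Business details change rarely: cache them for hours.
put("tenant-42", "business", "details", { name: "Acme" }, 2 * 60 * 60 * 1000);
// Customer data churns constantly: cache briefly (or not at all).
put("tenant-42", "customer", "c-1001", { name: "Jo" }, 60 * 1000);
```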
