For the sake of the question, let's say I have 2 microservices:
Identity management
Accounting
I know that microservices should not be tightly coupled and that each should have its own database.
Let's say that Accounting has invoices, and each invoice has an issuing agent.
An agent in Accounting also exists as a user in the Identity microservice.
If I understood correctly, data from Identity Management (users) should be copied to Accounting (agents), copying only the data needed for that bounded context (first and last name), so the invoice can have a proper issuingAgentId.
Is this the correct way to keep data consistent and shared between contexts?
Each time a user is created in the Identity microservice, a "UserCreated" event will be published, and Accounting or any other service interested in this event should listen and process it by adding a corresponding agent?
Same goes for updating user information.
This is one way to handle it, yes, and usually the preferred method. You keep a local cache in your service that holds copies of the data from another service. In an event-driven system, this means listening to events of interest and using them to update your local cache. The cache could be in-memory or persisted. In your use case, when raising an invoice, the Accounting context would look in its local cache for a user/AgentId before creating the invoice.
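A minimal sketch of that flow in C#; the UserCreated message, the Agent type, and the cache store are all hypothetical names, and the bus wiring is omitted:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Hypothetical event published by the Identity service.
public record UserCreated(Guid UserId, string FirstName, string LastName);

// Accounting's local copy: only the fields this bounded context needs.
public record Agent(Guid AgentId, string FirstName, string LastName);

public class UserCreatedHandler
{
    // Could equally be a table in Accounting's own database.
    private readonly ConcurrentDictionary<Guid, Agent> _agentCache;

    public UserCreatedHandler(ConcurrentDictionary<Guid, Agent> agentCache) => _agentCache = agentCache;

    // Invoked by whatever messaging infrastructure delivers the event.
    public Task Handle(UserCreated evt)
    {
        // Upsert, so duplicate or replayed events are harmless (idempotent handler).
        _agentCache[evt.UserId] = new Agent(evt.UserId, evt.FirstName, evt.LastName);
        return Task.CompletedTask;
    }
}
```

A UserUpdated handler would do the same upsert, which also covers the update case from the question.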
Other options:
Shared database
I know it is frowned upon (for good reason), but you can always share a database schema. For example, the Identity context can write to a user table, and the Accounting context can read from it when it needs an AgentId to put on an invoice. The trade-off is that you are coupling at the database level and introducing a single point of failure.
RPC
You can make an RPC call to another service when you need information. In your example, the Accounting context would call the Identity Management context for the AgentId/user information before raising an invoice. The trade-off with this approach is, again, coupling to the other service. What do you do when it is not available? You cannot raise an invoice.
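A sketch of that synchronous coupling, with the failure mode called out; the URL and DTO are assumptions:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record AgentDto(Guid AgentId, string FirstName, string LastName);

public class IdentityClient
{
    private readonly HttpClient _http = new() { BaseAddress = new Uri("https://identity.example/") };

    public async Task<AgentDto> GetAgentAsync(Guid userId)
    {
        try
        {
            // Request/response coupling: Accounting blocks on Identity for every invoice.
            return await _http.GetFromJsonAsync<AgentDto>($"api/users/{userId}")
                   ?? throw new InvalidOperationException($"User {userId} not found.");
        }
        catch (HttpRequestException ex)
        {
            // The trade-off in practice: Identity is down, so the invoice cannot be raised.
            throw new InvalidOperationException("Identity service unavailable; cannot raise invoice.", ex);
        }
    }
}
```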
Reporting domain
Another option is to have a completely separate service that listens for data from other services and maintains view models for UIs. This keeps your services ignorant of each other's concerns. In an event-driven system, you'd listen for events from other services that allow you to build a view model for the UI. This is usually a good option if all you are doing is viewing the data.
First of all:
There are only two hard things in Computer Science: cache invalidation and naming things.
In our application, we use Redis as an in-memory cache server. We store customer information under the composite key customer_CustomerGUID. Our strategy is as follows.
Take the Customer table, for example:
We have some endpoints that serve data from the cache (say, GetCustomerInformation).
In our business code, when we update customer information, we invalidate the cache value for that specific customer.
This way, we can serve the latest data from the cache every time we update, in real time.
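In StackExchange.Redis terms, that strategy might look roughly like the following; the Customer type and the database loader are placeholders, while the key format is the one described above:

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using StackExchange.Redis;

public record Customer(Guid CustomerGuid, string Name);

public class CustomerCache
{
    private readonly IDatabase _redis;
    public CustomerCache(IConnectionMultiplexer mux) => _redis = mux.GetDatabase();

    // Composite key, as described above.
    private static string Key(Guid id) => $"customer_{id}";

    // Cache-aside read: serve from Redis, fall back to the database on a miss.
    public async Task<Customer?> GetAsync(Guid id, Func<Guid, Task<Customer?>> loadFromDb)
    {
        var cached = await _redis.StringGetAsync(Key(id));
        if (cached.HasValue)
            return JsonSerializer.Deserialize<Customer>(cached.ToString());

        var customer = await loadFromDb(id);
        if (customer is not null)
            await _redis.StringSetAsync(Key(id), JsonSerializer.Serialize(customer));
        return customer;
    }

    // The call developers keep forgetting to make after an update.
    public Task InvalidateAsync(Guid id) => _redis.KeyDeleteAsync(Key(id));
}
```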
Now the problem is that the code base is growing, and so is the number of developers. New developers (or any developer) often forget to invalidate the customer cache when updating customer information, and there are a lot of places from which customer information is updated. Furthermore, the cache-invalidation code segment should not live in the business classes, from a design perspective (I guess).
We have considered several approaches, such as adding an interceptor to SaveChanges in EF (we use Entity Framework), or adding a HandlerAttribute to those endpoints that can potentially change customer information, among others. But none of them is convincing from a simplicity perspective.
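For what it's worth, the EF interceptor idea can centralize the invalidation in exactly one place. A sketch assuming EF Core's SaveChangesInterceptor and the CustomerCache sketch above; since the interceptor carries state, it should be registered per DbContext scope via AddInterceptors:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Diagnostics;

public class CustomerCacheInterceptor : SaveChangesInterceptor
{
    private readonly CustomerCache _cache;
    private readonly List<Guid> _pending = new();

    public CustomerCacheInterceptor(CustomerCache cache) => _cache = cache;

    // Before the save: remember which customers this unit of work touches.
    public override ValueTask<InterceptionResult<int>> SavingChangesAsync(
        DbContextEventData eventData, InterceptionResult<int> result, CancellationToken ct = default)
    {
        _pending.AddRange(eventData.Context!.ChangeTracker.Entries<Customer>()
            .Where(e => e.State is EntityState.Modified or EntityState.Deleted)
            .Select(e => e.Entity.CustomerGuid));
        return base.SavingChangesAsync(eventData, result, ct);
    }

    // After a successful save: invalidate them, so no business class has to remember to.
    public override async ValueTask<int> SavedChangesAsync(
        SaveChangesCompletedEventData eventData, int result, CancellationToken ct = default)
    {
        foreach (var id in _pending)
            await _cache.InvalidateAsync(id);
        _pending.Clear();
        return await base.SavedChangesAsync(eventData, result, ct);
    }
}
```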
Our whole application resides on Azure services. We are considering using Azure Logic Apps and Azure Functions to invalidate the cache: when customer information is updated, an Azure Logic App will call an Azure Function, and that Azure Function will invalidate the cache for that customer.
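A sketch of that function, assuming the isolated-worker model and the customer_{guid} key format described above; the route and the Logic App contract are made up:

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using StackExchange.Redis;

public class InvalidateCustomerCache
{
    private readonly IConnectionMultiplexer _redis; // registered in Program.cs

    public InvalidateCustomerCache(IConnectionMultiplexer redis) => _redis = redis;

    // The Logic App calls this endpoint with the customer's GUID after an update.
    [Function("InvalidateCustomerCache")]
    public async Task<HttpResponseData> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "customers/{customerGuid}/invalidate")]
        HttpRequestData req,
        string customerGuid)
    {
        await _redis.GetDatabase().KeyDeleteAsync($"customer_{customerGuid}");
        return req.CreateResponse(HttpStatusCode.NoContent);
    }
}
```

Note that the extra Logic App hop adds latency between the database write and the invalidation, so readers can see stale data in that window.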
Is this approach good enough to implement, or is there a better approach in our situation?
Let's say I want a simple set of web services off of one domain:
User authentication
Projects datastore
Does this mean I would create 2 different databases, with 2 different instances of express/flask/etc, with 2 different servers running on 2 different ports?
In short, no, it does not require that; however, you can do it that way if that is what you need.
Remember that microservices allow you to create services in a polyglot fashion. For example, you could host the user authentication in C++ and the projects in Java. However, most developers feel that hosting every microservice on a different technology is overkill.
Microservices will typically share persistent storage of some sort, i.e. a common SQL/NoSQL database back end. They are typically hosted on the same server as well, though in different process spaces, potentially allowing individual services to come and go without affecting the whole.
The micro part really refers to the business context and has nothing to do with the technical side of things. So having every service on a separate database and server does not make it a "microservice".
A service that does both employee registration and customer registration is probably not a microservice, if one considers that customers and employees are two entities with life cycles of their own. An employee might be assigned to a customer, but they should not share a service context.
Remember, there are no right or wrong decisions in this, just successful and unsuccessful SOA implementations.
I'm about to start a project that requires very fast response times and high availability. I have done a few Service Fabric projects before, so I'm feeling pretty confident about that.
I'm currently leaning towards a specific design, based on stateful content services as the main data source, with a single data persistence service saving to a database of some sort.
Read operations are done via Web API.
Write operations are done via Azure Service Bus communication, with Rebus as the handler.
Content services
The content services are stateful services which, on commit, send a message to the persistence service containing the object saved in the reliable dictionary, serialized as JSON.
The content services themselves are responsible for JSON deserialization in the event that they need to restore the data.
Restore scenarios could be when the entire dictionary is lost for some reason, or when a reset message is put on the bus.
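A rough sketch of that commit-then-send flow, assuming Rebus for the bus; the EntitySaved message and the SaveAsync method are hypothetical names:

```csharp
using System.Fabric;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;
using Rebus.Bus;

// Hypothetical message carrying the JSON copy to the persistence service.
public record EntitySaved(string Key, string Json);

public class ContentService : StatefulService
{
    private readonly IBus _bus;

    public ContentService(StatefulServiceContext context, IBus bus) : base(context) => _bus = bus;

    public async Task SaveAsync<T>(string key, T entity)
    {
        var dict = await StateManager.GetOrAddAsync<IReliableDictionary<string, string>>("content");
        var json = JsonSerializer.Serialize(entity);

        using (var tx = StateManager.CreateTransaction())
        {
            await dict.SetAsync(tx, key, json);
            await tx.CommitAsync();
        }

        // Caveat: this send is not atomic with the commit above; a crash between
        // the two loses the message, so the reset/reload path must tolerate gaps.
        await _bus.Send(new EntitySaved(key, json));
    }
}
```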
Persistence service
Receives a message from the bus and stores the included entity to a data store (not yet decided, maybe Table Storage).
Serves an entire repository of data when a service needs to reload its data.
Only concerns itself with storing and retrieving data; no integrity checks.
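The matching persistence side could be a Rebus handler writing to Table Storage (one of the options under consideration); EntitySaved is the hypothetical message from the sketch above:

```csharp
using System.Threading.Tasks;
using Azure.Data.Tables;
using Rebus.Handlers;

// Same hypothetical message as in the content-service sketch.
public record EntitySaved(string Key, string Json);

public class EntitySavedHandler : IHandleMessages<EntitySaved>
{
    private readonly TableClient _table;

    public EntitySavedHandler(TableClient table) => _table = table;

    public Task Handle(EntitySaved message) =>
        // Upsert keeps the handler idempotent if the bus redelivers a message;
        // no integrity checks, exactly as described above.
        _table.UpsertEntityAsync(new TableEntity("content", message.Key)
        {
            ["Json"] = message.Json
        });
}
```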
I'm really unsure whether this is a feasible way of designing a system that also holds a large amount of user data.
What are your thoughts on this design?
I ended up pursuing this solution, and it works and performs very well, but it needed extensive testing to make sure that everything works as expected.
I have been assigned to come up with a layered microservices architecture for Azure Service Fabric, but since my experience has mostly been with monolithic architectures, I can't come up with a specific solution.
What I have thought of so far is:
Data Layer - This is where all the Code First entities reside, along with the DbContext.
Business Layer - This is where all the Service Managers perform and enforce the business logic, i.e. UserManager (IUserManager), OrderManager (IOrderManager), InvoiceManager (IInvoiceManager), etc.
Web API (Self-Hosted Inside Service Fabric) - Although this Web API lives inside Service Fabric, it does nothing except receive requests and call the respective services under Service Fabric. The Web API layer would also handle any authentication and authorization (ASP.NET Identity) before passing the call on to the other services.
Service Fabric Services - UserService, OrderService, InvoiceService. These services are invoked from the Web API layer and receive the Business Layer (IUserManager, IOrderManager, IInvoiceManager) via dependency injection to perform their operations.
Do you think this is okay to proceed with?
One theoretical issue, though: while reading up on several microservices architecture resources, I found that all of them suggest keeping the business logic inside the service so that each service can be scaled independently. So I believe I'm violating a basic aspect of microservices.
I'm doing this because the customer requirement is to use this Business Layer across several projects, such as batch jobs (Azure WebJobs) and a backend dashboard for internal employees (ASP.NET MVC). If I don't keep the Business Layer shared, I have to write the same business logic again for the WebJobs and the backend dashboard, which I feel is not a good idea, as a simple change in business logic would then require code changes in several places.
One more concern is that, in that case, I have to go with service-to-service communication for ACID transactions. For example, while creating an Order, both an Order and an Invoice must be created. So I thought of using event-driven programming, i.e. the Order Service emits an event which the Invoice Service subscribes to, creating the Invoice upon creation of the Order. But the complication is: if the Invoice Service fails to create the invoice, it can either keep trying to do so indefinitely (which is a bad idea, I think), or emit another event which the Order Service subscribes to in order to roll back the order. There can be a lot of confusion with this.
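For concreteness, a sketch of the compensating-event variant described here, using Rebus-style handlers; the message types and CreateInvoiceAsync are placeholders. In practice, a bounded retry with a dead-letter queue is the usual middle ground between retrying forever and rolling back immediately:

```csharp
using System;
using System.Threading.Tasks;
using Rebus.Bus;
using Rebus.Handlers;

public record OrderCreated(Guid OrderId);
public record InvoiceCreationFailed(Guid OrderId, string Reason);

// Lives in the Invoice service: reacts to orders, compensates on failure.
public class OrderCreatedHandler : IHandleMessages<OrderCreated>
{
    private readonly IBus _bus;

    public OrderCreatedHandler(IBus bus) => _bus = bus;

    public async Task Handle(OrderCreated message)
    {
        try
        {
            await CreateInvoiceAsync(message.OrderId);
        }
        catch (Exception ex)
        {
            // Instead of retrying indefinitely, publish a compensating event that
            // the Order service subscribes to in order to roll back the order.
            await _bus.Publish(new InvoiceCreationFailed(message.OrderId, ex.Message));
        }
    }

    private Task CreateInvoiceAsync(Guid orderId) => Task.CompletedTask; // placeholder
}
```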
Also, I must mention that, we are using a Single Database as of now.
So my questions are...
What issues do you see with my approach? Is it okay?
If not, please suggest a better approach. Pointers to resources for implementation or conceptual details are welcome too.
NOTE: The client's requirement is that they can scale specific modules as needed. For example, UserService might not be used much, as there won't be many signups or user-profile changes daily, but OrderService may need to scale out, as there can be lots of orders coming in daily.
I'll be glad to learn, as this is my first chance to get my hands on designing a microservices architecture.
First of all, why does the customer want to use Service Fabric and a microservices architecture when, at the same time, it sounds like there are other parts of the solution (WebJobs etc.) that will not be part of that architecture but rather live in their own ecosystem (yet share logic)? I think it would be good for you to first understand the underlying requirements that should guide the architecture. What is most important?
Scalability? Flexibility?
Development and deployment? Maintainability?
Modularity, i.e. the ability to compose new solutions based on autonomous microservices?
The list could go on. Until you figure this out, there is really no point in designing further, as you don't know what you are designing for...
As for sharing business logic with WebJobs, there is nothing preventing you from sharing code packages containing the same BL; it doesn't have to be a shared service, and it doesn't mean it has to be packaged the same way in relation to its interface or persistence. Another thing to consider: why do you want to run WebJobs at all when you can build similar functionality in SF services?
I have 3 separate services using different databases, each with a REST interface:
First service: Information about Customers
Second service: Information about Customer Trades
Third service: Information about Customer Documentation
Problem:
Every customer has a Status that should be evaluated based on their trades and documents.
Which service should be responsible for this evaluation and how should I implement the orchestration between the other services?
If you can, I'd create a 4th service. This way you have a service that returns exactly what you need, avoiding the problem (and over-chattiness) of calling two services and merging the result sets. Otherwise, if you aren't in a position to create a 4th service, you could write a proxy service that, through one call, calls the other two services, and uses caching where possible to help cut down on repeated calls for commonly queried customers.
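A sketch of what that 4th (or proxy) service might do, fanning out to the trade and documentation services and merging the results; the URLs, DTOs, and status rule are all invented:

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record Trade(Guid CustomerId, decimal Amount);
public record Document(Guid CustomerId, bool Approved);

public class CustomerStatusService
{
    private readonly HttpClient _trades = new() { BaseAddress = new Uri("https://trades.example/") };
    private readonly HttpClient _docs = new() { BaseAddress = new Uri("https://documents.example/") };

    public async Task<string> EvaluateStatusAsync(Guid customerId)
    {
        // Fan out to both services in parallel and merge here, so no caller
        // has to make and combine two separate calls.
        var tradesTask = _trades.GetFromJsonAsync<Trade[]>($"api/customers/{customerId}/trades");
        var docsTask = _docs.GetFromJsonAsync<Document[]>($"api/customers/{customerId}/documents");
        await Task.WhenAll(tradesTask, docsTask);

        var trades = tradesTask.Result ?? Array.Empty<Trade>();
        var docs = docsTask.Result ?? Array.Empty<Document>();

        // Placeholder rule: the real evaluation logic is what this service owns.
        return trades.Length > 0 && docs.All(d => d.Approved) ? "Active" : "Pending";
    }
}
```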