Spring Boot Microservice - node.js

1st approach
Implement the user profile check in every microservice.
2nd approach: user profile service
Implement the user profile check in a single microservice.
What are other factors I might consider when making a decision? What would you do?

Actually, you haven't mentioned yet another approach, which I can recommend considering:
Introduce a gateway - a special service that takes care of authorization / authentication between the "outer world" and your backend services:
Client ---> Gateway ---> Service 1
               |-------> Service 2
               ...
It will be impossible to access Service 1, Service 2, etc. directly from the "outer world"; only the gateway is exposed, and it also takes care of routing.
On the other hand, all requests reaching the backend can be considered already authorized (they might carry additional headers with the "verified" roles list, or use some "standard" technology like JWT).
Besides the separation of concerns (backend services "think" only about the business logic implementation), this approach has the following benefits:
All the logic is in one place: easy to fix, upgrade, etc. Your first approach, by contrast, suffers in a heterogeneous ecosystem (what if the services are written in different languages, using different frameworks?) - you would have to re-implement the authorization in several technology stacks.
The client is not "aware" of the variety of services (only the gateway is an entry point; routing is done in the gateway).
There are no "redundant" calls (costing CPU / memory / IO) by backend services for authorization. Compare with the second presented approach, where you would have to call an external service on each request.
You can scale the authorization service (gateway) and the backend services separately.
For example, when you introduce a new service you don't have to think about how much overhead it adds to your authorization component (Redis, database, etc.), so you can scale it out purely by business requirements.

Related

How best to integrate a search service for multiple microservices with an API gateway

We have a bunch of microservices with an API gateway in front. We have created a search service that would search across the services and aggregate the data for the client app. We have two approaches to go about it:
Let the API gateway contact search service directly, giving it the context about which microservice to search in.
Let the API gateway contact each microservice directly (which we are already doing currently), and let each of the microservice handle contacting (or not) the search service by themselves.
I liked the 2nd approach because it cleanly hides the search abstraction from the API gateway, but since each microservice is responsible for contacting the search service, search-handling logic might be duplicated across services - something we want to avoid, since the number of services we have is huge.
What would you suggest would be the better pattern to go ahead with? Is there an even better architecture than these that I could leverage?
I don't know exactly what the responsibility of the search service is, but if, as I imagine, it is a service that aggregates information from other services to show to the user, I would handle this problem using one of these two approaches:
The search service periodically calls each of the microservices, asking for the information it needs and processing it accordingly.
If the search service doesn't know when this information will be generated, and you want to update it in near real time without the cost of intensive polling, I would use a message broker: the different microservices produce events carrying the information, and the search service consumes them and updates its data accordingly.
Either way, I wouldn't use the API gateway as an orchestrator to call the different microservices: the API gateway shouldn't know anything about the business logic. Its responsibility is only to be a single point of entry for outside calls, plus perhaps other cross-cutting tasks such as auditing or authorization.

Does SOA require a new instance of a webserver for each microservice?

Let's say I want a simple set of web services off of one domain:
User authentication
Projects datastore
Does this mean I would create 2 different databases, with 2 different instances of express/flask/etc, with 2 different servers running on 2 different ports?
In short, no, it does not require it - although you can do it that way if that is what you need.
Remember that microservices allow you to create services in a polyglot fashion. For example, you could implement user authentication in C++ and projects in Java. However, most developers feel that hosting every microservice on a different technology is overkill.
Microservices will typically share persistent storage of some sort, i.e. a common SQL/NoSQL database back end. They are typically hosted on the same server as well, though in separate process spaces, potentially allowing services to come and go without affecting the whole.
The "micro" part really refers to the business context and has nothing to do with the technical side of things. So having every service on a separate database and server does not by itself make it a "microservice".
A service that does both employee registration and customer registration is probably not a microservice, if one considers that customers and employees are two entities with life cycles of their own. An employee might be assigned to a customer, but they should not share a service context.
Remember, there are no right or wrong decisions in this - just successful and unsuccessful SOA implementations.
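To make the "no server per service required" point concrete, here is a sketch of two logical services sharing one Node process behind a single path-prefix router (service names and the routing scheme are illustrative):

```javascript
// Two logical "microservices" hosted in one process: one server, one port,
// dispatched by path prefix.
const services = {
  "/auth": (path) => ({ service: "auth", path }),
  "/projects": (path) => ({ service: "projects", path }),
};

// Pure routing function; in a real app this would sit inside a single
// http.createServer((req, res) => ...) handler.
function route(path) {
  const prefix = Object.keys(services).find((p) => path.startsWith(p));
  if (!prefix) return { status: 404 };
  return { status: 200, body: services[prefix](path) };
}
```

Splitting these onto separate ports, processes, or databases later is a deployment decision; the service boundaries are already there in the code.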

Service-to-service communication for Cloud Foundry

I need to deploy two node services to CF (each service in its own container).
The apps need to communicate. How is it recommended to implement this communication? I can't find any guide which explains service-to-service communication in CF, and since it should deploy to the cloud I need some best practices. Some examples will be very helpful.
This is a classic question that comes up in any enterprise application integration effort, and it comes down to what type of integration needs you have.
If an app wants synchronous communication to get a real-time response, RESTful APIs are the most popular integration style of this age. But you also need to consider that creating huge numbers of APIs (which is a downside of a microservices-based architecture) brings the overhead of maintaining the set and locating the correct one. An API gateway and a service-discovery tool should help here. I am a novice with Bluemix, but you can surely host a Spring Cloud Eureka or Consul based service discovery on it to serve the purpose, and similarly Spring Cloud Zuul as an API gateway.
Another catch here is to ensure you don't build one central service as a fat single point of failure (SPOF) catering to your whole microservices world; rather, have many such services, each catering to a contextually bounded set of microservices.
Along the same lines, if the need is asynchronous communication, message brokers such as RabbitMQ or Kafka are the best and simplest integration style for apps to communicate. The same catch applies: don't build a SPOF service, but rather have separate service instances, one for each set of bounded microservices, with the instances federated for wider communication.
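What Eureka or Consul provide for the synchronous case can be reduced to a tiny registry sketch (the API below is invented for illustration; real tools add health checks, TTLs, and replication):

```javascript
// name -> list of registered instance addresses
const registry = new Map();

// Each service instance registers itself on startup.
function register(name, address) {
  const list = registry.get(name) || [];
  list.push(address);
  registry.set(name, list);
}

// Callers resolve a live address instead of hard-coding hostnames;
// this sketch round-robins across registered instances.
const counters = new Map();
function resolve(name) {
  const list = registry.get(name);
  if (!list || list.length === 0) throw new Error(`no instances for ${name}`);
  const n = counters.get(name) || 0;
  counters.set(name, n + 1);
  return list[n % list.length];
}
```

The caller then issues its normal HTTP request against the resolved address, which is what a Zuul-style gateway or a client-side load balancer automates.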
Your answer will depend on what kind of communication you want between your apps.
If you're looking to deploy a microservice-based architecture pattern for your Node services, i.e. server code that performs an independent, granular business function, I would recommend getting started reading the docs here and using the new Bluemix Developer Console.
There is a growing set of patterns and starters that you can use to understand and develop cloud-native apps that communicate with each other by exposing API endpoints compliant with the Open API specification, auto-generating SDKs for your omnichannel client applications.
After downloading the selected starter, you can modify the code to expose an API that performs the business logic that you need. Subsequently, you can run your project locally in a container or deploy it to Bluemix using the bx dev command line tool.
After setting that up, you will have cross platform, language independent communication between your microservices and client applications.

How To Design The Layers in Azure Service Fabric

I have been assigned to design a layered microservices architecture for Azure Service Fabric, but since my experience has mostly been with monolithic architectures, I can't come up with a specific solution.
What I have thought as of now is like...
Data Layer - this is where all the Code First entities reside, along with the DbContext.
Business Layer - this is where the service managers perform and enforce the business logic, i.e. UserManager (IUserManager), OrderManager (IOrderManager), InvoiceManager (IInvoiceManager), etc.
WebAPI (self-hosted inside Service Fabric) - although this WebAPI lives inside Service Fabric, it does nothing except receive requests and call the respective services in Service Fabric. The WebAPI layer would also handle authentication and authorization (ASP.NET Identity) before passing the call on to other services.
Service Fabric Services - UserService, OrderService, InvoiceService. These services are invoked from the WebAPI layer and inject the Business Layer (IUserManager, IOrderManager, IInvoiceManager) via DI to perform their operations.
Do you think this is okay to proceed with?
One theoretical issue, though: while reading several microservices architecture resources, I found that all of them suggest keeping the business logic inside the service so that the specific service can be scaled independently. So I believe I'm violating a basic principle of microservices.
I'm doing this because the customer's requirement is to use this Business Layer across several projects, such as batch jobs (Azure WebJobs) and a backend dashboard for internal employees (ASP.NET MVC). If I don't keep the Business Layer shared, I have to write the same business logic again for the WebJobs and the backend dashboard, which I feel is not a good idea: a simple change in business logic would then require code changes in several places.
One more concern: in that case, I have to use service-to-service communication for ACID transactions. For example, while creating an order, both an Order and an Invoice must be created. So I thought of using event-driven programming, i.e. the Order service emits an event which the Invoice service subscribes to, creating the invoice on order creation. But the complication is that if the Invoice service fails to create the invoice, it can either keep retrying indefinitely (which I think is a bad idea) or emit another event for the Order service to subscribe to and roll back the order. There can be a lot of confusion with this.
Also, I must mention that, we are using a Single Database as of now.
So my questions are...
What issue do you see with my approach? Is it okay?
If not, please suggest me a better approach. You can guide me to some resources for implementation details or conceptual details too.
NOTE: The client's requirement is that they can scale specific modules as needed. For example, the UserService might not be used much, as there won't be many signups or user-profile changes daily, but the OrderService may need to scale out, as there can be lots of orders coming in daily.
I'll be glad to learn, as this is my first chance to get my hands on designing a microservices architecture.
First of all, why does the customer want to use Service Fabric and a microservices architecture when, at the same time, it sounds like other parts of the solution (WebJobs etc.) will not be part of that architecture but rather live in their own ecosystem (yet share logic)? I think it would be good for you to first understand the underlying requirements that should guide the architecture. What is most important?
Scalability? Flexibility?
Development and deployment? Maintainability?
Modularity in ability to compose new solutions based on autonomous microservices?
The list could go on. Until you figure this out there is really no point in designing further as you don't know what you are designing for...
As for sharing business logic with WebJobs, nothing prevents you from sharing code packages containing the same BL; it doesn't have to be a shared service, and it doesn't mean it has to be packaged the same way with respect to its interface or persistence. Another thing to consider: why do you want to run WebJobs at all when you can build similar functionality in SF services?

Azure Apps - Distributed Architecture - 1 API Layer vs 2 API Layers - Design decisions

Background
Having run through the Getting started with API Apps and ASP.NET in Azure App Service tutorial (https://azure.microsoft.com/en-gb/documentation/articles/app-service-api-dotnet-get-started/), we had an architecture question arise today around the design decisions made to split out the To Do List Application API layers into a Middle tier API app and Data tier API app.
When approaching build of an application using a distributed architecture, what considerations should take place to understand when this type of separation should occur in your API layers?
Another way of asking this question is what are the pros and cons of having a separate middle tier API layer and data tier API app when building your application?
Other Questions
I had a read of Web apps architecture: 1 or n API (see link that follows), which, while insightful, is slightly different from the question we are asking. We are talking about a single domain which has separate API layers for the middle tier (logic) and the data tier.
Web apps architecture: 1 or n API
This, of course, depends. Deciding whether to build what I call "infrastructure services" depends very strongly on your needs and your application(s).
Infrastructure tier services generally get much more re-use than business logic tier services. They are very easy to recompose into new applications. The most common instance of this is building an admin interface as a separate application.
If you have already built several applications in your organization and have found reuse occurring regularly, then I would seriously contemplate infrastructure services. If your organization is writing its first application, and you don't see this fanning out to additional interfaces, then maybe just isolate your data access in a DAO pattern; it's fairly straightforward to refactor it out into a stand-alone service later.
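The DAO isolation suggested here can be as small as the following sketch (class and method names are invented for illustration); the point is that business code depends only on the DAO's interface, so swapping in a remote infrastructure service later touches only the DAO:

```javascript
// In-memory DAO; a later refactor could replace this with an HTTP client
// talking to a stand-alone data-tier service, behind the same interface.
class InMemoryUserDao {
  constructor() {
    this.users = new Map();
  }
  save(user) {
    this.users.set(user.id, user);
    return user;
  }
  findById(id) {
    return this.users.get(id) || null;
  }
}

// Business logic sees only the DAO, never the storage detail.
function registerUser(dao, id, name) {
  if (dao.findById(id)) throw new Error("duplicate id");
  return dao.save({ id, name });
}
```
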
I think the example design is somewhat confusing. In the real world I have not seen such a design, because it looks like every function becomes an HTTP/RPC call.
In my experience, the SPA uses a public API (or gateway API) which then calls your internal APIs / microservices to aggregate results. It is your microservices that may have DAOs and, most importantly, the business logic.
