Secure communication of services within a Service Fabric standalone cluster

I have already secured my Service Fabric cluster (client-to-node and node-to-node) using the following reference doc:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-security
I need some help with setting up security for communication between the microservices within an application type inside Service Fabric.
For example, I have a sample application with an AngularJS front end and two stateful services, Service A and Service B.
Q: What is the best practice to secure the calls from the AngularJS front end to Service A, and from Service A to Service B, etc.? (red arrows in the diagram below)
[Diagram: sample application scenario]
Is there any reference document or book that I can refer to?

Related

Docker Microservice Architecture - Communication between different containers

I've just started working with Docker and I'm currently trying to work out how to set up a project using a microservice architecture.
My goal is to move the different services out of the API and instead have each one in its own container.
[Diagram: current architecture]
[Diagram: desired architecture]
Questions
How does the API gateway communicate with the internal services? Should all microservices have their own API that only accepts communication from the API gateway? Are there any other means of communication?
What would be the ideal authentication between the gateway and the microservices? JWT token? Basic Auth?
Do you see any problems with this architecture if hosted in Azure?
Is integration testing even possible in the desired architecture? For example, I use EF with SQLite in-memory for integration testing, and it's easily accessible within the API, but I don't see this working if the database is located in its own container.
Anything important here that I've missed?
I have built an application with a completely microservice-based architecture running on AWS ECS (EC2 Container Service); each microservice is pushed to a container as a Docker image. Two EC2 instances are running to achieve high availability, and the same microservices run on both instances, so if one instance goes down the other can take care of requests.
Each microservice uses its own database, and inter-microservice communication happens over HTTP using a client-side registry and discovery; Spring Cloud Consul and Netflix Eureka can be used for service discovery and registration.
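To make that registry-based lookup concrete, here is a minimal sketch (in TypeScript, against Consul's HTTP health API; the service name "payment-service" and the local agent address are placeholder assumptions, not part of the original setup) of how a caller could resolve another microservice before making an HTTP request:

```typescript
// Minimal client-side discovery sketch against Consul's HTTP health API.
// "payment-service" and the agent address are placeholder assumptions.
interface ConsulHealthEntry {
  Service: { Address: string; Port: number };
}

async function resolveService(name: string): Promise<string> {
  // Ask the local Consul agent for healthy instances of the service.
  const res = await fetch(`http://localhost:8500/v1/health/service/${name}?passing`);
  const entries: ConsulHealthEntry[] = await res.json();
  if (entries.length === 0) throw new Error(`no healthy instance of ${name}`);
  // Naive load balancing: pick a random healthy instance.
  const { Address, Port } = entries[Math.floor(Math.random() * entries.length)].Service;
  return `http://${Address}:${Port}`;
}

// Usage: call another microservice via the registry instead of a hard-coded host.
const baseUrl = await resolveService('payment-service');
const order = await fetch(`${baseUrl}/orders/42`).then(r => r.json());
console.log(order);
```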
Please find the diagram below:

How to discover Internal-only ASP.NET Core stateless service inside of Azure Service Fabric

I have an ASP.NET Core API that should be visible only inside of Azure Service Fabric. Following the recommendation, I would use Kestrel to host the application and let Service Fabric dynamically assign the port.
How do I discover the service from inside an ASP.NET Core web application, and what is the preferred way: the DNS service, the Naming service, or the reverse proxy?
In the provided example, running an ASP.NET Core API inside Service Fabric means having a Reliable Service and exposing a KestrelCommunicationListener from it.
So you're simply hosting ASP.NET Core inside a service.
Usually you don't access the hosting service from inside the controller.
You may want to access a different stateless service from a controller; in that case you can use SF remoting for minimal overhead.
Or, if you must access a different ASP.NET Core API running inside the cluster, a simple way to locate that API is the DNS-based approach.
Note: don't use the reverse proxy for this case, as it exposes all HTTP-based endpoints to the outside world.
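As a rough illustration of the DNS-based approach (shown here as a plain HTTP call in TypeScript; an ASP.NET Core controller would do the equivalent with HttpClient), assuming a target service whose manifest declares the DNS name serviceb.myapp and listens on port 8080 — both placeholders:

```typescript
// Hypothetical sketch: inside the cluster, Service Fabric's DNS service resolves the
// ServiceDnsName declared in the application manifest (commonly something like
// "serviceb.myapp"). The name and port below are placeholders for whatever the
// target service actually declares.
async function callServiceB(path: string): Promise<unknown> {
  const res = await fetch(`http://serviceb.myapp:8080${path}`);
  if (!res.ok) throw new Error(`Service B returned ${res.status}`);
  return res.json();
}

// The DNS name only resolves inside the cluster, so the endpoint stays internal,
// unlike endpoints exposed via the reverse proxy mentioned in the note above.
callServiceB('/api/values').then(console.log);
```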

App to App communication in Cloud Foundry

Assume you want to deploy 2 apps of which one provides some API to the second application.
With services I'd just bind the service (or declare it as a dependency in my manifest) to my application and hence get the information regarding host, port and credentials passed to my application (e.g. via env variables in Node.js). Is there a similar mechanism for application-to-application "communication"?
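For context, this is the binding mechanism I mean: Cloud Foundry injects the credentials of bound services into the VCAP_SERVICES environment variable as JSON. A minimal sketch of reading it in Node (the service label "cloudamqp" and the "uri" credential key are just placeholders for whatever broker is actually bound):

```typescript
// Sketch of reading a bound service's credentials from VCAP_SERVICES.
// The "cloudamqp" label and "uri" key are placeholders; they depend on the broker.
interface VcapService {
  name: string;
  credentials: Record<string, string>;
}

const vcap: Record<string, VcapService[]> = JSON.parse(process.env.VCAP_SERVICES ?? '{}');
const broker = vcap['cloudamqp']?.[0];
if (broker) {
  console.log(`Bound service ${broker.name}, connecting to ${broker.credentials['uri']}`);
}
```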
So far my approach is to use a RabbitMQ service (or any message broker/queue) which both applications are bound to and which I then use for cross-app communication.
Thanks!
Using a message broker, as you do, is definitely a viable solution. This allows for asynchronous communication. Yet you will have to take care of authentication yourself, as opposed to app <-> service communication, where authentication/authorization is established through Cloud Foundry service binding.
Another way would be to use a service registry for this. Both apps would register with the service registry and be able to discover each other.
You could try the Spring Cloud service registry (Eureka) or Consul. As for your message broker solution, it will not generate credentials for your apps the way a Cloud Foundry service binding does.
From your use case, for microservice-to-microservice discovery, you need Spring Cloud Services and Eureka.
I don't have much experience with Node.js, but some googling will turn up some articles. Here's one that may help you: https://www.npmjs.com/package/eureka-js-client
This article will give you an overview from a Java and Spring perspective: https://spring.io/guides/gs/service-registration-and-discovery/
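A minimal sketch of what registration with that eureka-js-client package might look like (configuration keys follow the package's README; the host names, ports and app names are placeholders):

```typescript
// Sketch: register this Node service with a Eureka server and look up another app.
// All hosts, ports and app names below are placeholder assumptions.
import { Eureka } from 'eureka-js-client';

const client = new Eureka({
  instance: {
    app: 'node-app',
    hostName: 'node-app.local',
    ipAddr: '10.0.0.10',
    port: { '$': 3000, '@enabled': 'true' },
    vipAddress: 'node-app',
    dataCenterInfo: {
      '@class': 'com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo',
      name: 'MyOwn',
    },
  },
  eureka: { host: 'eureka.local', port: 8761, servicePath: '/eureka/apps/' },
});

client.start((err?: Error) => {
  if (err) throw err;
  // Discover registered instances of another application by its app ID.
  const instances = client.getInstancesByAppId('OTHER-APP');
  console.log(instances.map((i: any) => i.hostName));
});
```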

Microservices Architecture in NodeJS

I was working on a side project and I decided to redesign my skeleton project as microservices; so far I haven't found any open-source project that follows this pattern. After a lot of reading and searching I arrived at this design, but I still have some questions and thoughts.
Here are my questions and thoughts:
How do I make the API gateway smart enough to load balance requests if I have two nodes of the same microservice?
If one of the microservices is down, how should the discovery service know?
Is there any similar implementation? Is my design right?
Should I use Eureka or something similar?
Your design seems OK. We are also building our microservice project using the API gateway approach. All the services, including the gateway service (GW), are containerized (we use Docker) Java applications (Spring Boot or Dropwizard). A similar architecture could be built using Node.js as well. Some topics to mention related to your question:
Authentication/Authorization: The GW service is the single entry point for the clients. All authentication/authorization operations are handled in the GW using JSON Web Tokens (JWT), which has Node.js libraries as well. We keep authorization information, like the user's roles, in the JWT token. Once the token is generated in the GW and returned to the client, the client sends the token in an HTTP header with each request; we then check whether the client has the required role to call the specific service and whether the token has expired. With this approach, you don't need to keep track of the user's session on the server side. Actually, there is no session; the required information is in the JWT token.
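A rough sketch of that token flow (using the jsonwebtoken package as one possible Node.js library; the secret, claim names and role are placeholders):

```typescript
// Sketch of the JWT flow described above. The secret and role names are placeholders.
import jwt from 'jsonwebtoken';

const SECRET = process.env.JWT_SECRET ?? 'change-me';

// Gateway side: issue a token after the user authenticates, embedding the roles.
function issueToken(userId: string, roles: string[]): string {
  return jwt.sign({ sub: userId, roles }, SECRET, { expiresIn: '1h' });
}

// Service side: verify the token from the Authorization header and check the role.
function authorize(authHeader: string | undefined, requiredRole: string): boolean {
  if (!authHeader?.startsWith('Bearer ')) return false;
  try {
    const payload = jwt.verify(authHeader.slice(7), SECRET) as { roles?: string[] };
    return payload.roles?.includes(requiredRole) ?? false;
  } catch {
    return false; // expired or tampered token
  }
}
```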
Service Discovery / Load balancing: We use Docker and Docker Swarm, the clustering tool bundled in the Docker engine (since Docker 1.12). Our services are Docker containers. The containerized approach using Docker makes it easy to deploy, maintain, and scale the services. At the beginning of the project, we used HAProxy, Registrator, and Consul together to implement service discovery and load balancing, similar to your drawing. Then we realized we don't need them for service discovery and load balancing as long as we create a Docker network and deploy our services using Docker Swarm. With this approach you can easily create isolated environments for your services, like dev, beta, and prod, on one or multiple machines by creating different networks for each environment. Once you create the network and deploy the services, service discovery and load balancing are not your concern. In the same Docker network, each container has the DNS records of the other containers and can communicate with them. With Docker Swarm, you can easily scale services with one command. On each request to a service, Docker distributes (load balances) the request to an instance of the service.
Your design is OK.
If your API gateway needs to implement (and that's probably the case) CAS or some other kind of auth (via one of the services, i.e. some kind of user service), and should also track all requests and modify the headers to carry the requester metadata (for internal ACL/scoping usage), your API gateway should be done in Node, but it should sit behind HAProxy, which will take care of load balancing/HTTPS.
Discovery is in the correct position; if you're looking for one that fits your design, look no further than Consul.
You can use consul-template or your own micro-discovery framework for the services and the API gateway, so they share endpoint data on boot.
ACL/authorization should be implemented per service, and the first request from the API gateway should be subject to all authorization middleware.
It's smart to track requests by having the API gateway attach a request ID to each one, so its lifecycle can be tracked within the "inner" system.
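For instance, a small Express middleware sketch on the gateway (Express is an assumption here, and "order-service" is a placeholder) that attaches an X-Request-ID header and propagates it to the inner services:

```typescript
// Sketch: the gateway attaches a request ID to every incoming request and forwards
// it to the inner services so a call can be traced across them.
import express from 'express';
import { randomUUID } from 'crypto';

const gateway = express();

gateway.use((req, res, next) => {
  // Reuse an existing ID (e.g. from a retry) or generate a new one.
  const requestId = (req.headers['x-request-id'] as string) ?? randomUUID();
  req.headers['x-request-id'] = requestId;
  res.setHeader('x-request-id', requestId);
  next();
});

// When proxying to an inner service, propagate the same header.
gateway.get('/orders/:id', async (req, res) => {
  const upstream = await fetch(`http://order-service:3000/orders/${req.params.id}`, {
    headers: { 'x-request-id': req.headers['x-request-id'] as string },
  });
  res.status(upstream.status).json(await upstream.json());
});
```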
I would add Redis for messaging/workers/queues and fast in-memory stuff like caching/cache invalidation (you can't handle a whole microservice architecture without one), or take RabbitMQ if you have many more distributed transactions and a lot of messaging.
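A small sketch of the cache/invalidation use, with the node-redis client (one option among several; ioredis works just as well, and the key names, TTL and service URL are placeholders):

```typescript
// Sketch: cache a downstream service's response in Redis with a TTL, and invalidate
// the key when the underlying data changes. Names and TTL are placeholders.
import { createClient } from 'redis';

const redis = createClient({ url: 'redis://redis:6379' });
await redis.connect();

async function getUserCached(id: string): Promise<unknown> {
  const key = `user:${id}`;
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  // Cache miss: fetch from the owning service, then cache for 60 seconds.
  const user = await fetch(`http://user-service:3000/users/${id}`).then(r => r.json());
  await redis.set(key, JSON.stringify(user), { EX: 60 });
  return user;
}

// Invalidate when the underlying data changes.
async function invalidateUser(id: string): Promise<void> {
  await redis.del(`user:${id}`);
}
```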
Spin all this up in containers (Docker) so it will be easier to maintain and assemble.
As for BI, why would you need a service for that? You could have an external ELK stack (Elasticsearch, Logstash, Kibana) and get dashboards, log aggregation, and a huge big-data warehouse at once.

Connecting Azure Web App to Service Fabric

I'm considering Reliable Actors right now, which are part of Service Fabric. I have an existing Web App that I'd like to keep and have act as an API surface for my actors. The Web App will also handle authentication and authorization before any calls get to my actors.
I can't tell from the documentation, but is it possible to connect a Web App to Service Fabric? Additionally, is it possible to limit connections to Service Fabric so that it doesn't accept any public connections? How would I go about setting this up, or is it even advisable to do something like this?
I know with Cloud Services, you can connect a Cloud Service to a Web App through a Virtual Network, so I'm at least familiar with that kind of setup.
You would do it the same way you do with cloud services - use a virtual network. Service Fabric is just a framework running on VMs in a cloud service.
Connections to the Service Fabric cluster can be controlled through the load balancers in the cluster VNet. I would suggest that you integrate the Web App into the Service Fabric cluster VNet; to do that you will have to add a VNet gateway.
