Docker Microservice Architecture - Communication between different containers - Azure

I've just started working with Docker and I'm currently trying to work out how to set up a project using a microservice architecture.
My goal is to move the different services out of the API and instead have each one in its own container.
Current architecture
Desired architecture
Questions
How does the API gateway communicate with the internal services? Should all microservices have their own API which only accepts communication from the API gateway? Are there any other means of communication?
What would be the ideal authentication between the gateway and the microservices? A JWT? Basic Auth?
Do you see any problems with this architecture if hosted in Azure?
Is integration testing even possible in the desired architecture? For example, I use EF with an SQLite in-memory database for integration testing, and it's easily accessible within the API, but I don't see this working if the database is located in its own container.
Anything important here that I've missed?

I built an application with a completely microservice-based architecture running on AWS ECS (EC2 Container Service), where each microservice is pushed to the container service as a Docker image. Two EC2 instances are running to achieve high availability, and the same microservices run on both instances, so if one instance goes down the other can take care of requests.
Each microservice uses its own database, and inter-microservice communication happens over HTTP using a client-side registry and service discovery; Spring Cloud Consul and Netflix Eureka can be used for service discovery and registry.
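As a rough sketch of what that client-side discovery looks like from a service's point of view (this assumes the eureka-js-client Node.js package, which comes up again later in this thread, and a registered app called ORDER-SERVICE; both names are illustrative):

```typescript
import { Eureka } from 'eureka-js-client';

// Assume `client` was configured with this service's own registration
// details and started elsewhere; it keeps a local copy of the registry.
declare const client: Eureka;

async function callOrderService(path: string): Promise<unknown> {
  // Client-side discovery: ask the local registry copy for live instances.
  const instances = client.getInstancesByAppId('ORDER-SERVICE') as any[];
  const target = instances[0]; // naive pick; real code would round-robin
  const res = await fetch(`http://${target.hostName}:${target.port.$}${path}`);
  return res.json();
}
```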


How do I make my microservices accessible only through the API gateway

I would like to know how I can protect my Node.js microservices so that only the API gateway can access them. Currently each microservice is exposed on its own port on my machine and can be accessed directly without passing through the gateway. That defeats the purpose of the gateway, which is to serve as the only entry point into the system for secure and authorized information exchange.
The microservices and the gateway are currently built with Node.js and Express.
The plan is to eventually deploy it on the cloud (DigitalOcean). I'd appreciate any response. Thanks.
Kubernetes can solve this problem.
Kubernetes manages containers, where each container can be a microservice.
When connecting your microservices to your gateway server, you can choose to allow external connections only to the gateway. You would have a load balancer / nginx in your Kubernetes cluster that redirects requests to your gateway server.
Kubernetes has many other features such as:
Service discovery: each of your microservices' IPs could change on restart/deployment unless you have static IPs for all your services. Service discovery solves this problem.
High availability, horizontal scaling and zero downtime: you can configure several replicas for each of your services, so when one replica goes down there are still others alive to deal with the remaining requests. This also helps with CI/CD: with something like GitHub Actions you can build a smooth pipeline, and when you deploy a new Docker image (update a microservice), Kubernetes launches the new container first and then kills the old one, so you get zero downtime.
If you are working with microservices, you should definitely take a deep dive into Kubernetes.
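To make the gateway-as-only-entry-point idea concrete, here is a minimal Express gateway sketch under those assumptions: the microservices sit behind internal-only addresses (e.g. ClusterIP Services named users-service and orders-service; the names and ports are made up), only the gateway is published externally, and the http-proxy-middleware package does the forwarding.

```typescript
import express from 'express';
import { createProxyMiddleware } from 'http-proxy-middleware';

const app = express();

// The gateway is the only container exposed outside the cluster. These
// targets resolve only on the internal cluster network, so clients cannot
// bypass the gateway and reach the services directly.
app.use('/users', createProxyMiddleware({ target: 'http://users-service:3000', changeOrigin: true }));
app.use('/orders', createProxyMiddleware({ target: 'http://orders-service:3000', changeOrigin: true }));

app.listen(8080, () => console.log('gateway listening on 8080'));
```

Auth, rate limiting and request logging would also live in this process, since it is the single choke point.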

Deploy and secure microservices communication - best practices

I have two microservices, each connected to a database, and two APIs, and I was asking myself what the best practices are for deploying them in production.
Firstly, I thought about putting all the code on the same server. Each microservice will be in a Docker container, and the same goes for the APIs. I will not use Docker for the databases (Mongo and MariaDB); instead, they will be installed directly on the server. Only the APIs' ports will be opened, and the APIs will communicate with the microservices via TCP.
Secondly, I was wondering how I can secure the microservices, and whether that is even needed. The validation logic will live only in the APIs.
Thank you!
Some food for thought:
Deploy several replicas of the same container to increase reliability and resilience. To manage load balancing and replicas you could use, for instance, Kubernetes.
If your containers communicate with each other, you could think about enabling mTLS (see the sketch after this list).
Expose only the needed ports.
If you deploy them on your own server, pay attention to maintaining the underlying operating system.
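For the mTLS point, a minimal sketch using Node's built-in https module (the certificate paths are placeholders; in practice each service would get a key pair signed by an internal CA):

```typescript
import https from 'node:https';
import { readFileSync } from 'node:fs';

// A service that only accepts connections from clients that present a
// certificate signed by the internal CA (mutual TLS).
const server = https.createServer(
  {
    key: readFileSync('certs/service.key'),    // this service's private key
    cert: readFileSync('certs/service.crt'),   // this service's certificate
    ca: readFileSync('certs/internal-ca.crt'), // CA that signs peer certs
    requestCert: true,        // ask every client for a certificate
    rejectUnauthorized: true, // reject clients without a valid one
  },
  (_req, res) => {
    res.writeHead(200);
    res.end('hello from an mTLS-protected service');
  },
);

server.listen(8443);
```

Calling services then present their own key/cert pair on each request, which is the mutual part of mTLS.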

Microservice design with Kubernetes - API gateway, communication, service discovery and db issues

Recently I have been researching microservices and Kubernetes. All the tutorials and articles I read online talk about general stuff. I have several specific questions about building a microservices app on Kubernetes.
API gateway: is the API gateway a microservice I build for my app that can automatically scale, or is it already a built-in function of Kubernetes? I ask because a lot of articles say that load balancing is part of the API gateway, which confuses me, since in Kubernetes load balancing is handled by a Service. Also, is this the same as the API gateway on AWS, and why don't people just use the AWS API Gateway instead?
Communication between services: from what I have read, there is the REST/RPC way and the message queue way. But why do people say that the REST way is for synchronous operations? Can we build the services and have them communicate over a REST API using Node.js async/await functions?
Service discovery: is this a problem with Kubernetes at all? Does Kubernetes automatically figure this out for you?
Databases: what is the best practice for deploying a database? Deploy it as a microservice on one of the nodes? Also, some articles say that each service should talk to a different database. So should I just split the tables of one database into several databases?
Is API gateway a microservice I built for my app that can automatically scale? Or is it already a built-in function of kubernetes?
Kubernetes does not have its own API gateway service. It has an Ingress controller, which operates as a reverse proxy and exposes Kubernetes resources to the outside world, and Services, which load-balance traffic between the Pods linked to them.
Also, Kubernetes provides auto-scaling according to the resources consumed by Pods (memory usage or CPU utilization) and some custom metrics. It is called the Horizontal Pod Autoscaler, and you can read more about it in the official documentation.
Service Discovery: Is this a problem with kubernetes at all? Does kubernetes automatically figure out this for you?
Service discovery is not a problem in Kubernetes; it has an entity called a Service that is responsible for this. For more information, look through the official documentation.
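In practice that means application code can reach another Service purely by its DNS name; a small sketch (the service and namespace names are hypothetical, and Node 18+'s built-in fetch is assumed):

```typescript
// "orders" is a Service in the "shop" namespace; cluster DNS resolves it to
// a stable ClusterIP that load-balances across the Service's Pods, so the
// caller never needs to know individual Pod IPs.
async function getOrder(id: string): Promise<unknown> {
  const res = await fetch(`http://orders.shop.svc.cluster.local/api/orders/${id}`);
  return res.json();
}
```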
Your other questions refer more to the architecture of your application.

App to App communication in Cloud Foundry

Assume you want to deploy two apps, one of which provides an API to the second application.
With services I'd just bind the service (or declare it as a dependency in my manifest) to my application and hence get the information regarding host, port and credentials passed to my application (e.g. via env variables in Node.js). Is there a similar mechanism for application-to-application "communication"?
So far my approach is to use a RabbitMQ service (or any message broker/queue) which both applications are bound to and which I then use for cross-app communication.
Thanks!
Using a message broker, as you do, is definitely a viable solution and allows for asynchronous communication. Yet you will have to take care of authentication yourself, as opposed to app <-> service communication, where authentication/authorization is established through the Cloud Foundry service binding.
Another way would be to use a service registry for this. Both apps would register with the service registry and be able to discover each other.
You could try the Spring Cloud service registry (Eureka) or Consul. As for your message broker solution: it will not generate credentials for your apps the way a Cloud Foundry service binding does.
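For comparison, bound-service credentials arrive through the VCAP_SERVICES environment variable, which the app parses at startup; a sketch in Node.js (the 'p-rabbitmq' label is only an example and depends on your marketplace offering):

```typescript
// Cloud Foundry injects bound-service credentials via VCAP_SERVICES.
const vcap = JSON.parse(process.env.VCAP_SERVICES ?? '{}');

// The key under which the binding appears depends on the broker/offering.
const rabbitBinding = vcap['p-rabbitmq']?.[0];
const amqpUri: string | undefined = rabbitBinding?.credentials?.uri;

if (!amqpUri) {
  throw new Error('No RabbitMQ binding found in VCAP_SERVICES');
}
// amqpUri can now be handed to an AMQP client library.
```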
For your use case of microservice-to-microservice discovery, you need Spring Cloud Services and Eureka.
I don't have much experience with Node.js, but some googling will give you some articles. Here's a package that may help you: https://www.npmjs.com/package/eureka-js-client
This guide gives an overview from a Java and Spring perspective: https://spring.io/guides/gs/service-registration-and-discovery/
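Registering a Node.js app with Eureka via that package looks roughly like this (a sketch; all host names and ports are placeholders):

```typescript
import { Eureka } from 'eureka-js-client';

// Register this app with a Eureka server so other apps can discover it.
const client = new Eureka({
  instance: {
    app: 'node-api',                       // name other apps look us up by
    hostName: 'node-api.example.internal',
    ipAddr: '10.0.0.12',
    port: { $: 8080, '@enabled': true },
    vipAddress: 'node-api',
    dataCenterInfo: {
      '@class': 'com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo',
      name: 'MyOwn',
    },
  },
  eureka: {
    host: 'eureka.example.internal', // the Eureka server itself
    port: 8761,
    servicePath: '/eureka/apps/',
  },
});

client.start(); // registers the instance and starts sending heartbeats
```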

Microservices Architecture in NodeJS

I was working on a side project and I decided to redesign my skeleton project as microservices. So far I haven't found any open-source project that follows this pattern. After a lot of reading and searching I arrived at this design, but I still have some questions and thoughts.
Here are my questions and thoughts:
How do I make the API gateway smart enough to load-balance requests if I have two nodes of the same microservice?
If one of the microservices is down, how should the discovery service know?
Is there any similar implementation out there? Is my design right?
Should I use Eureka or something similar?
Your design seems OK. We are also building our microservice project using the API gateway approach. All the services, including the gateway service (GW), are containerized (we use Docker) Java applications (Spring Boot or Dropwizard). A similar architecture could be built using Node.js as well. Some topics to mention related to your questions:
Authentication/authorization: the GW service is the single entry point for the clients. All authentication/authorization operations are handled in the GW using JSON Web Tokens (JWT), which has a Node.js library as well. We keep authorization information, like the user's roles, in the JWT token. Once the token is generated in the GW and returned to the client, the client sends it in an HTTP header on each request; we then check whether the client has the required role to call the specific service and whether the token has expired. In this approach, you don't need to keep track of the user's session on the server side. Actually, there is no session: the required information is in the JWT token.
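A minimal sketch of that token flow in Node.js, using the jsonwebtoken package (the secret, role names and routes are made up; real credential checking is elided):

```typescript
import express from 'express';
import jwt from 'jsonwebtoken';

const app = express();
const SECRET = process.env.JWT_SECRET ?? 'change-me'; // gateway signing secret

// 1. On login, the gateway issues a token carrying the user's roles.
app.post('/login', (_req, res) => {
  // ...credentials would be verified against the user service here...
  const token = jwt.sign({ sub: 'user-42', roles: ['orders:read'] }, SECRET, {
    expiresIn: '15m',
  });
  res.json({ token });
});

// 2. On every request, the gateway checks the token's validity and roles.
function requireRole(role: string): express.RequestHandler {
  return (req, res, next) => {
    try {
      const token = (req.headers.authorization ?? '').replace('Bearer ', '');
      const claims = jwt.verify(token, SECRET) as { roles?: string[] };
      if (!claims.roles?.includes(role)) {
        res.status(403).end(); // authenticated, but lacks the role
        return;
      }
      next();
    } catch {
      res.status(401).end(); // missing, invalid, or expired token
    }
  };
}

app.get('/orders', requireRole('orders:read'), (_req, res) => {
  res.json([]); // would proxy to the orders service here
});
```

Because everything needed for the check is inside the token, no session store is required, which is the stateless property described above.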
Service discovery / load balancing: we use Docker and Docker Swarm, a clustering tool bundled with the Docker engine (since Docker 1.12). Our services are Docker containers. The containerized approach makes it easy to deploy, maintain and scale the services. At the beginning of the project we used HAProxy, Registrator and Consul together to implement service discovery and load balancing, similar to your drawing. Then we realized we don't need them for service discovery and load balancing as long as we create a Docker network and deploy our services using Docker Swarm. With this approach you can easily create isolated environments for your services, like dev, beta and prod, on one or multiple machines by creating a different network for each environment. Once you create the network and deploy services, service discovery and load balancing are not your concern. In the same Docker network, each container has the DNS records of the other containers and can communicate with them. With Docker Swarm you can scale services with one command, and at each request to a service, Docker distributes (load-balances) the request to an instance of the service.
Your design is OK.
If your API gateway needs to implement (and that's probably the case) CAS or some other kind of auth (via one of the services, i.e. some kind of user service), and should also track all requests and modify the headers to carry the requester metadata (for internal ACL/scoping usage), your API gateway can be done in Node, but it should sit behind HAProxy, which will take care of load balancing/HTTPS.
Discovery is in the correct position; if you seek one that fits your design, look no further than Consul.
You can use consul-template or your own micro discovery framework for the services and the API gateway, so they share endpoint data on boot.
ACL/authorization should be implemented per service, and the first request from the API gateway should pass through all the authorization middleware.
It's smart to track requests by having the API gateway attach a request ID to each one, so its lifecycle can be traced through the "inner" system (see the sketch at the end of this answer).
I would add Redis for messaging/workers/queues and fast in-memory stuff like caching/cache invalidation (you can't manage a whole microservice architecture without one), or pick RabbitMQ if you have many more distributed transactions and a lot of messaging.
Run all of this in containers (Docker) so it is easier to maintain and assemble.
As for BI, why would you need a dedicated service for that? You could have an external ELK stack (Elasticsearch, Logstash, Kibana) and get dashboards, log aggregation, and a huge big-data warehouse at once.
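The request-ID idea above is cheap to implement at the gateway; a sketch (the x-request-id header name is a common convention, not a standard):

```typescript
import express from 'express';
import { randomUUID } from 'node:crypto';

const gateway = express();

// Tag every incoming request at the edge. Downstream services log this ID
// and forward the header in their own outgoing calls, so one user request
// can be traced through the whole "inner" system.
gateway.use((req, _res, next) => {
  if (!req.headers['x-request-id']) {
    req.headers['x-request-id'] = randomUUID();
  }
  next();
});
```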
