I have a service running on an Azure Kubernetes Service (AKS) cluster behind the AKS load balancer.
I want to forward certain HTTP (client) requests to all instances.
Is there any way to configure this behavior using AKS or Kubernetes in general?
Say I have an XYZ API running as two replicas/instances:
XYZ-1 pod instance
XYZ-2 pod instance
I have some REST API requests to the app at domain.com/testendpoint.
Currently, the AKS load balancer sends requests to XYZ-1 and XYZ-2 in round-robin fashion. I am looking to see whether it is possible to forward a request to both instances (XYZ-1 and XYZ-2) when the request endpoint is testendpoint, while all other API requests keep the round-robin behavior.
The use case is to refresh a service's in-memory data via a REST call once or twice a day; the call will be triggered by another service when needed. So I want to make sure all pod instances update/refresh their in-memory data in response to a single HTTP request.
if it is possible to forward a request to both instances (XYZ-1 and XYZ-2) when the request endpoint is testendpoint
This is not a feature of the HTTP protocol, so you need a purpose-built service to handle it.
The use case is to refresh a service's in-memory data via a REST call once or twice a day; the call will be triggered by another service when needed. So I want to make sure all pod instances update/refresh their in-memory data in response to a single HTTP request.
I suggest that you create a new utility service, "update-service", that you send the call to once a day. This service then makes a request to every instance of XYZ, e.g. XYZ-1 and XYZ-2.
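A minimal sketch of such an update-service in Node/TypeScript, assuming XYZ is additionally exposed through a Kubernetes headless Service (hypothetically named xyz-headless, in the default namespace) so that a DNS lookup returns one A record per pod, and that each pod serves the refresh endpoint on port 8080:

```typescript
import { promises as dns } from "node:dns";

// A headless Service (clusterIP: None) returns one DNS A record per ready pod,
// so resolving its name yields every pod IP instead of a single virtual IP.
async function refreshAllInstances(): Promise<void> {
  const podIps = await dns.resolve4("xyz-headless.default.svc.cluster.local");

  // Fan the refresh call out to every instance and wait for all of them.
  const results = await Promise.allSettled(
    podIps.map((ip) => fetch(`http://${ip}:8080/testendpoint`, { method: "POST" }))
  );

  results.forEach((result, i) => {
    if (result.status === "rejected") {
      console.error(`refresh failed for pod ${podIps[i]}:`, result.reason);
    }
  });
}

refreshAllInstances().catch(console.error);
```

If you would rather not add a headless Service, the update-service could instead list the pod IPs through the Kubernetes Endpoints API; the DNS approach just spares it from needing API credentials.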
I have always worked on mobile technologies and now I am stepping into backend systems, more specifically systems design. I keep coming across conflicting statements about the roles of an API gateway and a load balancer. Googling has only returned the same half a dozen results that mostly focus on implementations of the load balancer or API gateway offered by some well-known provider. I will list here all the confusion I am facing, in the hope someone can clarify it.
Sometimes I come across claims that the API Gateway is the single point of communication with client devices. On the other hand, some places mention that 'the request goes to a load balancer, which spreads it across servers equally'. So which is correct: does the API Gateway receive requests, or the load balancer?
Other places, when I googled the topic, say that the two are totally different. I understand that an API Gateway does a lot of stuff, like SSL termination, logging, throttling, validation, etc., but it also does load balancing. So is an API Gateway a load balancer itself, equipped with other responsibilities?
On the topic, I want to understand whether a load balancer distributes load among servers of the same cluster, or across different data centers or clusters. And what about an API Gateway?
What is so specific to an API gateway that makes it the default choice for a microservice architecture? Where are API gateways hosted? Does DNS resolve the domain name to a load balancer or to an API gateway?
As might be clear by now, I am totally confused. In what systems does a load balancer provide more benefit than an API Gateway, if that question even makes sense?
API Gateway and Load Balancer are two different things.
Load Balancer -> It's software that works at the protocol or socket level (e.g. TCP, HTTP, or port 3306, etc.). Its job is to balance the incoming traffic by distributing it to the destinations with various logics (e.g. round robin).
It doesn't offer features such as authorisation checks, authentication of requests, etc.
Whereas
API Gateway -> It's a managed service provided by various hosting companies to manage API operations and seamlessly scale the API infrastructure.
It takes care of access control, response caching, response types, authorisation, authentication, request throttling, data handling, identifying the right destinations based on custom rules, and seamless scaling of the backend.
Managed API gateways generally come with scalable infrastructure by default, so putting them behind a load balancer might not make sense.
As for resolving the domain, the DNS almost always resolves to the load balancer, which in turn fetches the response from the API gateway service.
DNS -> Load Balancer -> API gateway -> Backend service
I hope this explanation clears up your confusion.
An API gateway predominantly does API management and provides various other key features such as IAM (Identity and Access Management), rate limiting, and circuit breakers. Hence, it mainly eliminates the need to implement API-specific code for functionality such as security, caching, throttling, and monitoring in each microservice. Microservices typically expose their REST APIs for use in front ends, other microservices, and third-party apps with the help of the API gateway.
However, API management normally does not include a load-balancing function, so it should be used in conjunction with a load balancer to achieve that.
In system architecture based on Azure, there is Azure Application Gateway, a load balancer that runs on Layer 7 and provides more features than a traditional (Layer 4) load balancer by making routing decisions based on additional attributes of the HTTP request or the content of the traffic. It can also be termed an application load balancer, and it is used in conjunction with Azure API Management (the API gateway). Azure also has Traffic Manager, which operates at the DNS level and uses DNS to direct client requests to the most appropriate service endpoint based on a traffic-routing method and the health of the endpoints. Traffic Manager applies rules configured at the DNS level and enables distribution of the load over multiple regions and data centers. Within every region or data center there are application gateways coupled with load balancers, such that the application gateways help determine the application server to fetch the response from, and the load balancers do the load balancing.
System overview based on Azure:
Here are a few related references:
Azure Application Gateway - https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-introduction
Azure Load Balancer- https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-overview
Azure Traffic Manager - https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-overview
Scenario Architecture - https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-load-balancing-azure
There are two scenarios to consider here to clarify the confusion. I have illustrated them using a microservices example, as the distinction really only makes sense there.
Scenario 1: You have a cluster of API Gateways
User ---> Load Balancer (provided by Cloud Providers like AWS or your own) ---> API Gateway Cluster ---> Service Discovery Agent (like eureka) ---> Microservice A ---> Client Side Load Balancer ---> Microservice B
Scenario 2: You have a single API Gateway
User ---> API Gateway ---> Service Discovery Agent (like Eureka) ---> Microservice A ---> Client Side Load Balancer ---> Microservice B
I hope you can see why we needed the load balancer before the API Gateway in Scenario 1: there we had multiple instances of the API gateway to handle the large traffic and to avoid overburdening a single gateway, since the gateway itself can have several tasks to manage depending on the requirements. To distribute the load among the gateway instances, we put a load balancer in front of them.
Inside a microservices environment, you may need the load-balancing concept in multiple places. To accept outside traffic you maintain a load balancer provided by a cloud provider like AWS; Eureka (service discovery) also acts like a load balancer when multiple instances of the same service are registered with it; and finally there is client-side load balancing (each microservice has its own client-side load balancer that maintains a local cache), used when microservices communicate among themselves, to avoid burdening the service discovery agent (Eureka) by asking it for the other services' addresses on every call. A sketch of such a client-side load balancer follows below.
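A minimal sketch of a client-side load balancer, assuming a hypothetical registry endpoint that returns the instance URLs for a service name (roughly what a Eureka client provides); the local cache is refreshed periodically so the discovery agent is not queried on every call:

```typescript
// Hypothetical registry lookup: returns base URLs of all live instances
// of a service (stand-in for what a Eureka client would give you).
async function fetchInstances(serviceName: string): Promise<string[]> {
  const res = await fetch(`http://registry:8761/instances/${serviceName}`);
  return res.json();
}

class ClientSideLoadBalancer {
  private cache: string[] = [];
  private next = 0;

  constructor(private serviceName: string, refreshMs = 30_000) {
    // Refresh the local cache periodically instead of on every request,
    // so the discovery agent is not hit for each call.
    setInterval(() => this.refresh().catch(console.error), refreshMs);
  }

  private async refresh(): Promise<void> {
    this.cache = await fetchInstances(this.serviceName);
  }

  // Plain round robin over the locally cached instance list.
  async pick(): Promise<string> {
    if (this.cache.length === 0) await this.refresh();
    if (this.cache.length === 0) throw new Error(`no instances of ${this.serviceName}`);
    return this.cache[this.next++ % this.cache.length];
  }
}

// Usage inside microservice A when calling microservice B:
const lb = new ClientSideLoadBalancer("microservice-b");
// const base = await lb.pick(); then fetch(`${base}/some/endpoint`);
```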
Note: In the flow diagrams, please don't read the path API Gateway ---> Service Discovery ---> Microservice as the gateway forwarding the request to service discovery, which then routes it onward. The gateway only checks the service registry with the discovery agent and then routes the request directly to the correct microservice (not through the service discovery agent).
Load Balancer:
Its main purpose is to distribute incoming traffic across multiple back-end systems.
We can configure different routes for different back-end systems.
We get a static IP address for the load balancer's endpoints (usually not available with API gateways).
We can configure health checks (usually not available with API gateways).
In the case of cloud providers, you usually "pay for idle as well".
API Gateway:
It also routes traffic to back-end systems based on the URL.
BUT its main purpose is "API management".
Below are key features which are usually not available in "Load Balancers":
Can implement rate limiting and bursting limits (see the token-bucket sketch after this list)
Can do request validation and request/response mapping
Cloud API gateways usually allow export/import across API platforms using a Swagger spec
Allows caching responses
In the case of cloud providers, usually "pay per use"
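To illustrate the rate-limiting/bursting item above, here is a minimal token-bucket sketch; the rate and burst values are arbitrary assumptions:

```typescript
// Token bucket: allows up to `burst` requests at once, refilled at `ratePerSec`.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private ratePerSec: number, private burst: number) {
    this.tokens = burst;
  }

  allow(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at the burst size.
    this.tokens = Math.min(
      this.burst,
      this.tokens + ((now - this.lastRefill) / 1000) * this.ratePerSec
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false; // the caller should respond with HTTP 429
  }
}

// e.g. 10 requests/second with bursts of up to 20:
const limiter = new TokenBucket(10, 20);
```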
DNS is responsible for resolving a given domain name to the nearest IP address inside the network.
The API gateway is responsible for authentication, finding the correct APIs to call (with or without a load balancer), circuit breaking, and response consolidation.
The load balancer is responsible for distributing incoming requests across different machines that have the same service deployed on them, based on load or perhaps in round-robin fashion.
So one way of doing it is:
DNS -> API Gateway -> Load Balancer
NOTE: The LB can be placed before the gateway, depending on traffic and use case.
We are splitting a REST API monolith into microservices and we have encountered the following issue.
Let us assume the following:
the project is ASP.NET MVC, hosted on IIS
the project is hosted on a dedicated server (not in a cloud)
the monolith REST API had URLs defined such as www.domain.com/orders/, www.domain.com/tickets/, etc.
When splitting the monolith into microservices, the ideal situation would be that we end up with:
ms1 -> www.domain.com/orders/
ms2 -> www.domain.com/tickets/
Since microservices do not usually correlate one-to-one with resources, this could get messy; for example, one microservice could serve more than one resource, or one resource could be served by more than one microservice.
e.g.
www.domain.com/tickets/ (ms1)
www.domain.com/tickets/reports (ms2)
or
www.domain.com/tickets (ms1)
www.domain.com/orders (ms1)
What would be the solution for this?
Use IIS rewrite to match a resource to a microservice,
e.g. GET www.domain.com/tickets/5 is rewritten by IIS to call -> ticketsms.domain.com/tickets/5
Use an API gateway to route requests to the proper microservice endpoint,
e.g.
GET www.domain.com/tickets/5 goes via the API gateway to call -> ticketsms.domain.com/tickets/5
So basically the primary goal is to have the full REST API appear as:
GET www.domain.com/tickets/5
GET www.domain.com/orders/5
POST www.domain.com/orders/
GET www.domain.com/invoices/100
etc
So are IIS rewrite and an API gateway the only two options here?
Should microservices be exposed directly to the clients, or should they go through an API gateway? Is an API gateway overkill in this scenario?
The API gateway definitely adds value. What you split is your implementation into smaller units (the microservices); the gateway lets you manage, monitor, and govern the API interfaces centrally as a single unit, separately from the implementation(s).
Exposing individual microservice endpoints might not be easy to manage; for instance, you would need to apply access control policies to each one separately.
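To make the API-gateway option from the question concrete, here is a minimal sketch of path-prefix routing in a Node gateway; the internal hostnames follow the question's ticketsms example, while the port and the routing table itself are assumptions:

```typescript
import http from "node:http";

// Public path prefixes mapped to internal microservice hosts
// (the ticketsms/ordersms names follow the question's example).
const routes: Record<string, string> = {
  "/tickets": "http://ticketsms.domain.com",
  "/orders": "http://ordersms.domain.com",
};

const gateway = http.createServer(async (req, res) => {
  const prefix = Object.keys(routes).find((p) => req.url?.startsWith(p));
  if (!prefix) {
    res.writeHead(404).end("no such resource");
    return;
  }
  // Forward to the matching microservice, preserving the original path.
  // (Header and request-body forwarding are omitted here for brevity.)
  const upstream = await fetch(routes[prefix] + req.url, { method: req.method });
  res.writeHead(upstream.status);
  res.end(Buffer.from(await upstream.arrayBuffer()));
});

gateway.listen(80);
```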
I was working on a side project and I decided to redesign my skeleton project as microservices. So far I haven't found any open-source project that follows this pattern. After a lot of reading and searching I arrived at this design, but I still have some questions and thoughts.
Here are my questions and thoughts:
How do I make the API gateway smart enough to load balance requests if I have two nodes of the same microservice?
If one of the microservices is down, how should the discovery service know?
Is there any similar implementation? Is my design right?
Should I use Eureka or something similar?
Your design seems OK. We are also building our microservice project using the API Gateway approach. All the services, including the gateway service (GW), are containerized (we use docker) Java applications (Spring Boot or Dropwizard). A similar architecture could be built using nodejs as well. Some topics to mention related to your question:
Authentication/Authorization: The GW service is the single entry point for the clients. All the authentication/authorization operations are handled in the GW using JSON Web Tokens (JWT), which has a nodejs library as well. We keep authorization information like the user's roles in the JWT token. Once the token is generated in the GW and returned to the client, at each request the client sends the token in an HTTP header; we then check whether the client has the required role to call the specific service, or whether the token has expired. In this approach, you don't need to track the user's session on the server side. Actually, there is no session. The required information is in the JWT token.
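A minimal sketch of that token flow using the jsonwebtoken npm package; the secret, the claim layout, and the role names are assumptions:

```typescript
import jwt from "jsonwebtoken";

const SECRET = process.env.JWT_SECRET ?? "change-me"; // assumed shared HMAC secret

interface Claims {
  sub: string;
  roles: string[]; // assumption: roles embedded in the token, as described above
}

// Issued by the gateway after a successful login.
function issueToken(userId: string, roles: string[]): string {
  return jwt.sign({ sub: userId, roles }, SECRET, { expiresIn: "1h" });
}

// Called by the gateway on every request before routing to a service.
function authorize(token: string, requiredRole: string): Claims {
  const claims = jwt.verify(token, SECRET) as Claims; // throws if expired/invalid
  if (!claims.roles.includes(requiredRole)) {
    throw new Error("forbidden"); // respond with HTTP 403
  }
  return claims;
}
```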
Service Discovery / Load balancing: We use docker and docker swarm, a docker-engine clustering tool bundled into the docker engine (since Docker 1.12). Our services are docker containers. The containerized approach using docker makes it easy to deploy, maintain, and scale the services. At the beginning of the project, we used HAProxy, Registrator, and Consul together to implement service discovery and load balancing, similar to your drawing. Then we realized we didn't need them for service discovery and load balancing as long as we create a docker network and deploy our services using docker swarm. With this approach you can easily create isolated environments for your services, like dev, beta, and prod, on one or multiple machines by creating different networks for each environment. Once you create the network and deploy the services, service discovery and load balancing are not your concern. In the same docker network, each container has the DNS records of the other containers and can communicate with them. With docker swarm, you can easily scale services with one command. At each request to a service, docker distributes (load balances) the request to an instance of the service.
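For example, once services share a docker network, one service can call another simply by its service name; docker's embedded DNS resolves it and swarm spreads the calls across replicas (the service name and port here are hypothetical):

```typescript
// From any container on the same docker network, "xyz-service" resolves via
// docker's embedded DNS; swarm distributes the calls across its replicas.
const res = await fetch("http://xyz-service:8080/health");
console.log(res.status); // no registry client or load-balancer code needed
```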
Your design is OK.
If your API gateway needs to implement (and that's probably the case) CAS or some other kind of auth (via one of the services, i.e. some kind of user service), and should also track all requests and modify the headers to carry the requester metadata (for internal ACL/scoping usage), your API gateway can be built in Node, but it should sit behind HAProxy, which will take care of load balancing and HTTPS.
Discovery is in the correct position; if you seek one that fits your design, look no further than Consul.
You can use consul-template, or use your own micro-discovery framework for the services and the API gateway, so they share endpoint data on boot.
ACL/authorization should be implemented per service, and the first request from the API gateway should be subject to all authorization middleware.
It's smart to have the API gateway attach a request ID to each request, so its lifecycle can be tracked within the "inner" system.
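A minimal sketch of that request-ID tagging as Express-style middleware (assuming the gateway is built on Express; the x-request-id header name is a common convention, not a requirement):

```typescript
import { randomUUID } from "node:crypto";
import type { Request, Response, NextFunction } from "express";

// Attach a request ID at the gateway; downstream services log and propagate
// the same header so one request can be traced end to end.
function requestId(req: Request, res: Response, next: NextFunction): void {
  const id = req.header("x-request-id") ?? randomUUID();
  req.headers["x-request-id"] = id; // forwarded to upstream services
  res.setHeader("x-request-id", id); // echoed back to the caller
  next();
}
```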
I would add Redis for messaging, workers, queues, and fast in-memory needs like caching and cache invalidation (you can't run a whole MS architecture without one), or take RabbitMQ if you have many more distributed transactions and a lot of messaging.
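For instance, a sketch of cache invalidation over Redis pub/sub using the node-redis (v4) package; the channel, the key names, and the in-process cache are made up for illustration:

```typescript
import { createClient } from "redis";

const localCache = new Map<string, string>(); // hypothetical in-process cache

const redis = createClient({ url: "redis://redis:6379" });
const subscriber = redis.duplicate(); // pub/sub needs its own connection
await redis.connect();
await subscriber.connect();

// Writer side: update the shared cache, then notify every service instance.
async function updateUser(id: string, data: object): Promise<void> {
  await redis.set(`user:${id}`, JSON.stringify(data), { EX: 3600 });
  await redis.publish("cache-invalidation", `user:${id}`);
}

// Each instance drops its local copy when a key is invalidated elsewhere.
await subscriber.subscribe("cache-invalidation", (key) => {
  localCache.delete(key);
});
```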
Run all of this in containers (Docker) so it will be easier to maintain and assemble.
As for BI, why would you need a service for that? You could have an external ELK stack (Elasticsearch, Logstash, Kibana) and get dashboards, log aggregation, and a huge big-data warehouse at once.
Hi, we have a UI component deployed to Bluemix on Node.js which makes REST service calls (JSON/XML) to services deployed in a data center. These calls go through the IBM DataPower gateway as a security proxy.
DataPower establishes an HTTPS mutual-authentication connection (using certs that are exchanged offline) to the caller.
Although this method is secure, it is time-consuming to set up, and if this connection is set up for each service request, it will produce slow responses for the end user.
To optimize response time we are looking for any solution that can pool connections between the Node.js app deployed on Bluemix and the DataPower security proxy. Does anyone have experience in this area?
In regards to "it is time-consuming to set up": in DataPower you can create a multi-protocol gateway (MPGW) in front of your services to act as a router. The MPGW will match service calls based on their URI and route them accordingly. In this scenario, you will only need to configure a single endpoint in the Bluemix Cloud Integration service in order to work with all your services. One downside to this approach is that it will be harder to control access to specific on-premise services because they will all be exposed to your Bluemix app as a single service.
In regards to optimizing response times, where are you seeing the bottleneck?
If the establishment of the TCP connections is causing too much overhead, you should be able to configure your Node.js app to use or reuse persistent connections via keepalive settings, or you can look into setting up a connection pool that manages that for you (e.g. https://www.npmjs.com/package/generic-pool seems a popular choice).
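A minimal sketch of the keepalive approach using Node's built-in https.Agent; the host name, socket limits, and timeout values are placeholder assumptions:

```typescript
import https from "node:https";

// Reuse TCP/TLS connections to DataPower instead of paying the mutual-auth
// handshake on every request.
const agent = new https.Agent({
  keepAlive: true,
  maxSockets: 10, // cap on concurrent connections to the proxy
  keepAliveMsecs: 30_000, // TCP keep-alive probe interval on idle sockets
});

function callService(path: string): Promise<string> {
  return new Promise((resolve, reject) => {
    https
      .get({ host: "datapower.example.com", path, agent }, (res) => {
        let body = "";
        res.on("data", (chunk) => (body += chunk));
        res.on("end", () => resolve(body));
      })
      .on("error", reject);
  });
}
```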
On the DataPower side, make sure the front/back persistent timeout is set according to your requirements: http://www-01.ibm.com/support/knowledgecenter/SS9H2Y_7.2.0/com.ibm.dp.doc/mpgw_availableproperties_serviceview.html?lang=en
Other timeout values in DataPower can be found at http://www-01.ibm.com/support/docview.wss?uid=swg21469404