Is it recommended to use application gateway to load balance database servers? - azure

I have 3 database servers in Azure and I want to load balance between them. For my application servers I am using Application Gateway.
Now I am not sure which one (Application Gateway or a traditional load balancer) I should use for load balancing the database servers.
Can anyone clear up my confusion?

Application Gateway is a Layer 7 load balancer, which means it only works with web traffic (HTTP, HTTPS, WebSocket, and HTTP/2).
I believe a database server would expose a TCP endpoint, but not a web endpoint.
For this reason, you would need a traditional load balancer (Azure Load Balancer), which works at Layer 4.
https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-faq#how-do-application-gateway-and-azure-load-balancer-differ
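To see why a database backend is a Layer 4 target, note that all a Layer 4 balancer can really observe about a backend is whether its TCP port accepts connections; it never parses SQL or HTTP. A minimal Python sketch of such a TCP reachability check (the addresses and the SQL Server port 1433 in the example are illustrative only):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if host:port accepts a plain TCP connection.

    This is all a Layer 4 probe can see: not the database protocol,
    only whether the transport-level endpoint is reachable.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical address): a SQL Server backend would
# typically listen on TCP 1433.
# tcp_port_open("10.0.0.4", 1433)
```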

Related

Azure Application Gateway - check health on subset of backend nodes

I have a Service Fabric cluster that hosts some number of identical applications. Each application has two main components: a stateless service that hosts a web API (it listens on a unique port number) and an actor service.
In front of it there is an Application Gateway instance with multi-site listeners to reach the proper application instance based on the URL. The scale set for the Service Fabric cluster is set as the backend pool for the Application Gateway.
For each application I have separate HTTP settings with a unique backend port to reach. One of the configuration options for a listener is a health probe that checks the web API health, by default on each backend node.
There is no problem when the API is deployed on every backend node, but when the API is deployed only on a subset of nodes, the health probe reports the app as unhealthy on the nodes without it.
Is there a supported way to configure the Application Gateway health probe to check health only on a subset of backend nodes? For apps running on a Service Fabric cluster, as in my case, this would be strongly desired.
I recommend that you use a reverse proxy on the cluster for this. You can use the built-in reverse proxy, or Traefik for this.
This ensures that all incoming traffic is routed to the services.
It does introduce an additional network hop, so there is a performance impact.

Azure Traditional Load Balancer VS Azure Application Gateway response latency?

Environment details: I have an application hosted in two Azure environments for two clients. The application consists of an ASP.NET Web API backend and an Angular frontend. Both applications are hosted on two web servers (Windows VMs). I'm using a Load Balancer in the first environment (Environment 1) and an Application Gateway in the second environment (Environment 2).
Problem: The issue I'm having is that the API request response time in Environment 1 is faster than in Environment 2. Below is a screenshot of the browser inspect window for the same request.
According to the Timing tab, Environment 1 has a faster response time than Environment 2.
Question: Is this response time difference due to using a Load Balancer vs. an Application Gateway?
The biggest difference between Azure Load Balancer and Azure Application Gateway is that they work at different layers of the OSI model.
Azure Load Balancer is a high-performance, low-latency Layer 4 load-balancing service (inbound and outbound) for all UDP and TCP protocols. It is built to handle millions of requests per second while ensuring your solution is highly available, and it is zone-redundant, ensuring high availability across Availability Zones. Because it operates at the transport layer, it can serve requests with relatively lower latency.
Application Gateway provides application delivery controller (ADC) functionality as a service, offering various Layer 7 load-balancing capabilities. Use it to optimize web farm productivity by offloading CPU-intensive SSL termination to the gateway.
For more references:
Overview of load-balancing options in Azure
Azure — Difference between Azure Load Balancer and Application Gateway

Load balancer and API Gateway confusion

I have always worked on mobile technologies and now I am stepping into backend systems, more specifically systems design. I keep coming across conflicting statements about the roles of an API gateway and a load balancer. Googling has only returned the same half a dozen results that mostly focus on the implementation of the load balancer or API gateway service provided by some famous provider. I will list here all the confusion I am facing, in the hope that someone can clarify all of it.
Sometimes I come across the statement that the API gateway is the single point of communication with client devices. On the other hand, some places mention that 'the request goes to the load balancer, which spreads it across servers equally'. So which is correct: does the API gateway receive requests, or the load balancer?
Other places, when I googled the topic, say that the two are totally different. I've understood that an API gateway does a lot of stuff, like SSL termination, logging, throttling, validation, etc., but that it also does load balancing. So is an API gateway a load balancer itself, equipped with other responsibilities?
On that topic, I want to understand whether a load balancer distributes load among servers of the same cluster or across different data centers or clusters. And what about the API gateway?
What is so specific to an API gateway that it is the default choice for a microservice architecture? Where are API gateways hosted? Does DNS resolve a domain name to a load balancer or to an API gateway?
As is probably clear, I am totally confused. In what systems does a load balancer provide more benefit than an API gateway, if the question even makes sense?
API Gateway and Load Balancer are two different things.
Load balancer -> It's software that works at the protocol or socket level (e.g. TCP, HTTP, or port 3306). Its job is to balance the incoming traffic by distributing it to the destinations using various logics (e.g. round robin).
It doesn't offer features such as authorisation checks, authentication of requests, etc.
Whereas
API gateway -> It's a managed service provided by various hosting companies to manage API operations and seamlessly scale the API infrastructure.
It takes care of access control, response caching, response types, authorisation, authentication, request throttling, data handling, identifying the right destinations based on custom rules, and seamless scaling of the backend.
Managed API gateways generally come with scalable infrastructure by default, so putting them behind a load balancer might not make sense.
About resolving the domain: most likely the DNS resolves to the load balancer, which in turn fetches the response from the API gateway service.
DNS -> Load Balancer -> API gateway -> Backend service
Hope I could explain and clear up your confusion.
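The round-robin distribution mentioned above can be sketched in a few lines of Python (the backend pool and port are illustrative, not a real deployment):

```python
from itertools import cycle

# Hypothetical backend pool of three database servers.
backends = ["10.0.0.4:3306", "10.0.0.5:3306", "10.0.0.6:3306"]
rotation = cycle(backends)

def next_backend() -> str:
    """Pick the next backend in strict round-robin order.

    Each new connection is handed to the next destination in the
    rotation, so load spreads evenly across the pool.
    """
    return next(rotation)

# Six incoming connections cycle through the pool twice.
picks = [next_backend() for _ in range(6)]
```

Real load balancers offer more logics than this (least connections, source-IP hash, weighted round robin), but the core idea is the same: a selection policy applied per connection.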
An API gateway predominantly does API management and provides various other key features such as IAM (identity and access management), rate limiting, and circuit breakers. Hence, it mainly eliminates the need to implement API-specific code for functionality such as security, caching, throttling, and monitoring in each microservice. Microservices typically expose their REST APIs for use by front ends, other microservices, and third-party apps with the help of an API gateway.
However, API management normally does not include a load-balancing function, so it should be used in conjunction with a load balancer.
In a system architecture based on Azure, there is Azure Application Gateway, which is a load balancer that runs at Layer 7 and provides more features than a traditional load balancer (Layer 4): it can route traffic using decisions based on additional attributes of the HTTP request or the content of the traffic. This can also be termed an application load balancer, and it can be used in conjunction with Azure API Management (the API gateway). Azure also has Traffic Manager, which operates at the DNS level: it uses DNS to direct client requests to the most appropriate service endpoint based on a traffic-routing method and the health of the endpoints. Traffic Manager uses rules configured at the DNS level and enables distribution of the load over multiple regions and data centers. Within every region or data center there would be application gateways coupled with load balancers, such that the application gateways help determine the application server to fetch the response from, and the load balancers handle the load balancing.
System overview based on Azure:
Here are a few related references:
Azure Application Gateway - https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-introduction
Azure Load Balancer- https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-overview
Azure Traffic Manager - https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-overview
Scenario Architecture - https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-load-balancing-azure
There are two scenarios to consider here to clarify the confusion. I have illustrated this with a microservices example, as that is where it makes the most sense.
Scenario 1: You have a cluster of API Gateways
User ---> Load Balancer (provided by Cloud Providers like AWS or your own) ---> API Gateway Cluster ---> Service Discovery Agent (like eureka) ---> Microservice A ---> Client Side Load Balancer ---> Microservice B
Scenario 2: You have a single API Gateway
User ---> API Gateway ---> Service Discovery Agent (like Eureka) ---> Microservice A ---> Client Side Load Balancer -> Microservice B
I hope you can see why we needed a load balancer before the API gateway in Scenario 1: there we had multiple instances of the API gateway to handle the large traffic and to avoid overburdening a single gateway, since the gateway itself can have several tasks to manage depending on the requirements. To distribute the load among those instances, we put a load balancer in front.
Inside a microservices environment, you may need the load-balancing concept in multiple places. To accept outside traffic you maintain a load balancer provided by a cloud provider such as AWS; Eureka (service discovery) also acts like a load balancer if multiple instances of the same service are registered with it; and finally there is client-side load balancing (each microservice has its own client-side load balancer that maintains a local cache), used when microservices communicate among themselves, to avoid burdening the service discovery agent (Eureka) by not checking with it every time for the address of another microservice.
Note: In the flow diagram, please don't read the path API Gateway --> Service Discovery --> Microservice as if the gateway forwards the request to service discovery, which then routes it onward. The gateway only consults the service registry from the discovery agent and then routes the request directly to the correct microservice (not through the service discovery agent).
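The client-side load balancing with a local cache described above can be sketched as follows. This is a sketch under assumptions: the `lookup_instances` function and its registry contents are stand-ins for a real Eureka-style discovery call, and the addresses are invented.

```python
import itertools
import time

def lookup_instances(service_name: str) -> list[str]:
    """Stand-in for a service-discovery lookup (e.g. a Eureka query).
    In a real system this would be a network call to the discovery agent."""
    registry = {"service-b": ["10.0.1.4:8080", "10.0.1.5:8080"]}
    return registry[service_name]

class ClientSideLoadBalancer:
    """Round-robins over the instances of a service, refreshing the
    local cache from discovery only after `ttl` seconds have elapsed,
    so most calls never touch the discovery agent."""

    def __init__(self, service_name: str, ttl: float = 30.0):
        self.service_name = service_name
        self.ttl = ttl
        self._cached_at = 0.0
        self._rotation = None

    def _refresh_if_stale(self) -> None:
        if self._rotation is None or time.monotonic() - self._cached_at > self.ttl:
            self._rotation = itertools.cycle(lookup_instances(self.service_name))
            self._cached_at = time.monotonic()

    def choose(self) -> str:
        self._refresh_if_stale()
        return next(self._rotation)
```

The design point is the TTL: the caller trades a little staleness (an instance may die before the cache expires) for far fewer round trips to the discovery agent.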
Load Balancer:
Its main purpose is to distribute incoming traffic across multiple back-end systems.
We can configure different routes for different back-end systems.
We get a static IP address for the load balancer's endpoints (usually not available with API gateways)
Can configure health checks (usually not available with API gateways)
In the case of cloud providers, usually "pay for idle as well"
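The health-check item in the list above boils down to: only route new traffic to backends whose probe currently passes. A minimal sketch, assuming a pluggable `probe` callable and invented addresses (a real balancer probes on a timer rather than per request):

```python
import itertools
from typing import Callable

class HealthCheckedPool:
    """Round-robins over a backend pool, skipping any backend whose
    health probe reports unhealthy at selection time."""

    def __init__(self, backends: list[str], probe: Callable[[str], bool]):
        self.backends = backends
        self.probe = probe
        self._rotation = itertools.cycle(backends)

    def choose(self) -> str:
        # Try each backend at most once per call before giving up.
        for _ in range(len(self.backends)):
            candidate = next(self._rotation)
            if self.probe(candidate):
                return candidate
        raise RuntimeError("no healthy backends available")
```

In practice the probe result would come from a background checker (TCP connect or an HTTP GET to a health path), not a synchronous call in the request path.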
API Gateway:
It also routes traffic to back-end systems based on the URL.
BUT its main purpose is targeted towards "API management".
Below are key features that are usually not available in load balancers:
Can implement rate limiting and bursting limits
Can do request validation and request/response mapping
Cloud API gateways usually allow export/import across API platforms using a Swagger/OpenAPI spec
Allows caching responses
In the case of cloud providers, usually "pay per use"
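The "rate limiting and bursting limits" item above is commonly implemented as a token bucket: the bucket's capacity sets the allowed burst, and its refill rate sets the sustained request rate. A minimal sketch (the rate and capacity numbers are illustrative; the injectable clock exists only to make the behaviour easy to demonstrate):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows short bursts up to `capacity`
    while enforcing an average of `rate` requests per second."""

    def __init__(self, rate: float, capacity: float, clock=time.monotonic):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        """Consume one token if available; refuse the request otherwise."""
        now = self.clock()
        # Refill proportionally to the time elapsed, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A gateway would keep one bucket per API key or client, returning HTTP 429 when `allow()` is False.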
DNS is responsible for resolving a domain name to the nearest IP address inside the network.
The API gateway is responsible for authentication, finding the correct APIs to call (with or without a load balancer), circuit breaking, and response consolidation.
The load balancer is responsible for distributing incoming requests across different machines that have the same service deployed on them, on the basis of load or maybe in round-robin fashion.
So one way of doing it is:
DNS -> Gateway -> LB
NOTE: The LB can be placed before the gateway depending on the traffic and use case.

Load balancer for Azure Service Fabric Cluster on-premises

As developers we wrote microservices on Azure Service Fabric and we can run them in Azure in some sort of PaaS concept for many customers. But some of our customers do not want to run in the cloud, as databases are on-premises and not going to be available from the outside, not even through a DMZ. It's ok, we promised to support it as Azure Service Fabric can be installed as a cluster on-premises.
We have an API gateway microservice running inside the cluster on every virtual machine. It uses the name resolver, and requests are routed and distributed accordingly. However, the API that this gateway microservice provides is the entry point for another piece of client software that our customers use; that software runs outside the cluster and has to send requests to the API.
I suggested using a load balancer like HAProxy or Nginx on a separate machine (or machines) that the client software would send its requests to; the reverse proxy would then forward them to an available machine inside the cluster.
It seems that is not what our customer wants; another machine as a load balancer is not an option. They suggest making the client software smarter so it can figure out which host to go to. In other words: we should write our own failover/load balancer inside the client software.
What other options do we have?
Install the Network Load Balancing feature on each of the virtual machines to give the cluster a single IP address. Is this even possible? Something like https://www.poweradmin.com/blog/configuring-network-load-balancing-in-windows-server/
Suggest an API gateway outside the cluster, like KONG https://getkong.org/
Something else ?
PS: The client applications do not send many requests per second, maybe a few per minute.
We had a very similar problem: many services and a Service Fabric cluster that runs on-premises. When it was time to use a load balancer we installed IIS on the same machines where the Service Fabric cluster runs. As IIS is a good load balancer, we use IIS as a reverse proxy only for the API gateway. Kestrel hosting is used for the other services that communicate over HTTP. The API gateway microservice is the single entry point for all clients and always has a static URI inside Service Fabric; we used that URI to configure IIS.
If you do not have the possibility to use IIS, then look at Using nginx as an HTTP load balancer.
You don't need another machine just for HTTP forwarding. Just use/run it as a service on the cluster.
Did you consider using the built in Reverse Proxy of Service Fabric? This runs on all nodes, and it will forward http calls to services inside the cluster.
You can also run nginx as a guest executable or inside a Container on the cluster.
We also faced the same situation when we started working with a Service Fabric cluster. We configured Application Gateway as a proxy, but it did not provide features like HTTP-to-HTTPS redirection.
For that reason, we configured Nginx instead of Azure Application Gateway as the proxy to the Service Fabric application.

How can I use Application Gateway to make load balancing to external URLs?

My company wants something like the Application Gateway to be a scalable entrypoint of all incoming requests, with SSL offloading, and balance those requests to external web servers, which are not on our Azure subscription, but belong to the company.
If Application Gateway is indeed the recommended way, how can I declare in the XML configuration file something like that? And if it's not, what's the best way I can achieve that?
Application Gateway is an option to achieve this, but by using an Application Gateway you consume an Azure VM resource. The scalability is OK, but you have to pre-create more application gateways in case of a scale-out. For a scale-down you should also first check how to reroute traffic from the current gateway to the others before you scale down.
I would recommend another design, using Azure App Services. This is a web server farm as a service; IIS runs in the web servers and you can create a forwarding/redirecting reverse proxy with ARR. Check out Azure topics like Application Request Routing and reverse proxy:
http://blogs.iis.net/carlosag/setting-up-a-reverse-proxy-using-iis-url-rewrite-and-arr
http://www.iis.net/learn/extensions/url-rewrite-module/iis-url-rewriting-and-aspnet-routing
Regards,
Patrick
