What is the Azure Resource Manager equivalent of VIP Swap?

Azure classic Cloud Services come with a built-in load balancer that allows a fast VIP swap from production to staging, and vice versa. What equivalent is provided by Azure Resource Manager? I can use DNS, but then I have the TTL delay.
I want the fast swap because my back-end servers are stateful and cannot process the same data in both staging and production without overwriting each other. In my current system, out-of-date connections (e.g. those held open by HTTP keep-alive) are rejected and a reload is forced, so clients establish fresh connections.
I guess I might be able to do it using Azure Application Gateway, but it is not listed as one of its features.

You can do a VIP swap in ARM with two Azure load balancers by disassociating the public IPs and then reassigning them. It's not a fast deployment-slot swap like the one Cloud Services offers, however, as it can take a minute or so to disassociate both IP addresses (you could speed this up by doing it in parallel). Based on your question you've already looked at this approach, but I'm documenting it here as an option. There are some notes on this approach here: https://msftstack.wordpress.com/2017/02/24/vip-swap-blue-green-deployment-in-azure-resource-manager/
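For reference, here is a rough, untested sketch of that swap using the azure-mgmt-network Python SDK. All names are placeholders, a spare public IP ("pip-temp") is assumed so neither frontend is ever left without an address, and the exact model handling may differ between SDK versions:

```python
# Rough, untested sketch of swapping public IPs between two ARM load
# balancers with the azure-mgmt-network SDK. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import PublicIPAddress

SUBSCRIPTION_ID = "<subscription-id>"
RG = "my-rg"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

prod = client.load_balancers.get(RG, "lb-production")
stage = client.load_balancers.get(RG, "lb-staging")
temp_ip = client.public_ip_addresses.get(RG, "pip-temp")  # spare IP used during the swap

prod_ip_id = prod.frontend_ip_configurations[0].public_ip_address.id
stage_ip_id = stage.frontend_ip_configurations[0].public_ip_address.id

def set_frontend_ip(lb, ip_id):
    # Point the first frontend configuration at the given public IP and
    # wait for the long-running update to finish (this is the slow part).
    lb.frontend_ip_configurations[0].public_ip_address = PublicIPAddress(id=ip_id)
    return client.load_balancers.begin_create_or_update(RG, lb.name, lb).result()

# 1. Park production on the spare IP, which frees its original address.
prod = set_frontend_ip(prod, temp_ip.id)
# 2. Move the old production address onto staging (freeing staging's address).
stage = set_frontend_ip(stage, prod_ip_id)
# 3. Move the old staging address onto production.
prod = set_frontend_ip(prod, stage_ip_id)
```

Running the two halves of each step in parallel (or pre-staging the updates) is how you would shave the swap time down, but there is still a window where clients hit the temporary address.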

In Azure Resource Manager there are three options: Azure Load Balancer (layer 4), Application Gateway (layer 7) and Traffic Manager (DNS level). I think you can use Load Balancer in your scenario.
To summarise the difference between Load Balancer and Application Gateway: Load Balancer distributes TCP/UDP traffic at the transport layer within a region, while Application Gateway is an HTTP(S) reverse proxy that adds layer-7 features such as SSL termination, cookie-based session affinity and URL-path-based routing; Traffic Manager works purely at the DNS level to direct clients across regions.

Related

Trying to achieve simple fail over for two VMs hosted on Azure

I am running a web-based online application and trying to achieve HA.
I created two Windows VMs in an availability set.
All I am looking for is a simple failover setup: when my main VM is down for any reason, incoming traffic should be redirected to my backup VM until the main VM is up and running again.
I know that Azure Traffic Manager can achieve this by using the Priority routing type and adding endpoints for the public IPs assigned to my VMs.
But Traffic Manager uses DNS to route traffic, so there is some downtime before it redirects traffic to my backup VM.
Please check this answer as well for more info on why Traffic Manager is not the solution, even with fast-interval settings:
https://stackoverflow.com/a/34469575/10786981
I also can't use a load balancer, as I need the active/passive model and the load balancer doesn't support it.
Third-party load balancers are expensive, and we are really looking for a simple solution here.

Does Azure networking (VNet, Load Balancer, Application Gateway, Traffic Manager) always need a VM?

I am starting to read the documentation on Azure networking, and every single example everywhere starts with two virtual machines and then explains the rest, be it subnets, Traffic Manager, Load Balancer, etc.
Maybe it's a dumb question, but can I do load balancing for Azure App Services, Azure DB, storage accounts, etc. without virtual machines?
For PaaS services you mostly cannot use load balancing, because it makes little sense: if you were to load balance between PaaS services you would need to replicate the data on your own. Besides, that's the idea behind PaaS: you don't have to care about the underlying infrastructure, it just works.
But for web apps (where it does make sense) you can use load balancers, Application Gateway or Traffic Manager to balance load.
You need to understand network design and build your theory on top of it.
Load balancing is required for HA and redundancy in a network; beyond that, you use whatever capabilities the load balancer offers based on your requirements and application.
PaaS services are not completely managed by the user. You are paying a premium for using them, and they come with built-in redundancy which you should opt for during deployment.
Load balancing on Azure is primarily "configurable" for IaaS only.

In Azure Logic Apps, how do you load balance between two logic apps using a load balancer?

There are two logic apps in two different regions and I want to do load balancing between them; how do I do this?
Through some source I got to know that it is possible through API Management, but they have not mentioned how to do it.
So, how do I load balance between two logic apps?
Well... why do you want to do this? "Load balancing", especially with Logic Apps, is fundamentally different on Azure than on-premises or self-hosted. It's not wrong, just different ;)
What they were probably referring to was Azure Load Balancer, which appears as a networking service, not APIM.
You can use this to distribute requests as you would with traditional load balancers.
Since you want to load balance across regions, I would look into Azure Traffic Manager. Traffic Manager is a DNS load balancer that sits outside/above Azure regions and lets you balance traffic based on various routing profiles (e.g. Weighted, Performance, Priority).
High-level / general steps are:
Set up Logic Apps in two regions
Create and register a public DNS domain for the logic apps, e.g. apps.foo.com (typically done outside Azure)
Point your DNS record for apps.foo.com to Azure Traffic Manager
Add endpoints to Azure Traffic Manager for the Logic App in Region 1 and the Logic App in Region 2, and set up your Traffic Manager profile
Calls to the Logic Apps that start with the custom DNS domain get routed to Traffic Manager, which then distributes them to the regions based on your configured profile (see the sketch below).
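As a rough, non-authoritative sketch of the Traffic Manager part of those steps, here is roughly what it could look like with the azure-mgmt-trafficmanager Python SDK; the resource group, profile name, DNS names and endpoint targets are all placeholders, and the exact model fields may vary between SDK versions:

```python
# Hypothetical sketch: a Traffic Manager profile with two external
# endpoints, one per regional Logic App. Names and targets are made up.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import (DnsConfig, Endpoint,
                                              MonitorConfig, Profile)

client = TrafficManagerManagementClient(DefaultAzureCredential(),
                                        "<subscription-id>")

profile = Profile(
    location="global",                  # Traffic Manager profiles are global
    traffic_routing_method="Weighted",  # or "Performance", "Priority", ...
    dns_config=DnsConfig(relative_name="apps-foo", ttl=30),
    monitor_config=MonitorConfig(protocol="HTTPS", port=443, path="/"),
    endpoints=[
        Endpoint(name="region1",
                 type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                 target="logicapp-region1.example.net",  # placeholder hostname
                 weight=1),
        Endpoint(name="region2",
                 type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                 target="logicapp-region2.example.net",  # placeholder hostname
                 weight=1),
    ],
)
client.profiles.create_or_update("my-rg", "apps-foo-tm", profile)
# Finally, point the apps.foo.com CNAME at apps-foo.trafficmanager.net.
```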

How do you set up Azure load balancing for micro-services?

We've got an API micro-services infrastructure hosted on Azure VMs. Each VM hosts several APIs, which are separate sites running on Kestrel. All external traffic comes in through a reverse proxy (RP) running on IIS.
We have some APIs that are designed to accept external requests and some that are internal-only APIs.
The internal APIs are hosted on scale sets, with each scale-set VM being a replica that hosts all of the internal APIs. There is an internal load balancer (ILB)/VIP in front of the scale set. The root issue is that we have internal APIs that call other internal APIs hosted on the same scale set. Ideally these calls would go to the VIP (using internal DNS) and the VIP would route to one of the machines in the scale set. But it looks like Azure doesn't allow this; per the documentation:
You cannot access the ILB VIP from the same Virtual Machines that are being load-balanced
So how do people set this up with micro-services? I can see three ways, none of which are ideal:
1. Separate the APIs out onto different scale sets. Not ideal, as the services are very lightweight and I don't want to triple my Azure VM expenses.
2. Convert the internal LB to an external LB (add a public IP address), then put that LB in its own network security group/subnet to only allow calls from our Azure IP range. I would expect more latency here, and exposing the endpoints externally in any way creates more attack surface area as well as more configuration complexity.
3. Set up the VMs to loop back if they need a call to the ILB, meaning any request originating from a VM is handled by that same VM. This defeats the purpose of micro-services behind a VIP: an internal micro-service may be down on the same machine for some reason and available on another; that's the reason we set up health probes on the ILB for each service separately. If the call just goes back to the same machine, you lose resiliency.
Any pointers on how others have approached this would be appreciated.
Thanks!
I think your problem is related to service discovery.
Load balancers are obviously not designed for that. You should consider dedicated software such as Eureka (which can work outside of AWS).
With service discovery, your microservices call each other directly once they have been discovered.
Also take a look at client-side load-balancing tools such as Ribbon.
@Cdelmas's answer on service discovery is awesome. Please allow me to add my thoughts:
For services such as yours, you can also look into Netflix's Zuul proxy for server-side and client-side load balancing. You could even use Hystrix on top of Eureka for latency and fault tolerance. Netflix is way ahead of the game on this.
You may also look into the Consul.io product for your cause if you want to use the Go language. It has scriptable configuration for better managing your services, allows advanced security configurations and supports non-REST endpoints. Eureka also does these things, but requires you to add a configuration server (Netflix Archaius, Apache ZooKeeper, Spring Cloud Config), coded security and access support using Zuul/Sidecar.
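To make the service-discovery idea a bit more concrete, here is a minimal sketch against a local Consul agent's HTTP API; the service name, port and health-check URL are invented for illustration. Each instance registers itself with a health check, and callers ask Consul for healthy instances only and pick one client-side:

```python
# Minimal service-discovery sketch using Consul's HTTP API via `requests`.
# Service name, port and health-check URL are placeholders.
import random
import requests

CONSUL = "http://127.0.0.1:8500"

# Register this instance of "orders-api" with an HTTP health check, so the
# agent only advertises it while the check passes.
requests.put(f"{CONSUL}/v1/agent/service/register", json={
    "Name": "orders-api",
    "ID": "orders-api-1",
    "Port": 5001,
    "Check": {"HTTP": "http://127.0.0.1:5001/health", "Interval": "10s"},
}).raise_for_status()

def resolve(service):
    # Ask Consul for healthy instances only, then pick one on the client side
    # (random here; Ribbon-style round-robin or weighting also works).
    r = requests.get(f"{CONSUL}/v1/health/service/{service}",
                     params={"passing": "true"})
    r.raise_for_status()
    entry = random.choice(r.json())
    address = entry["Service"]["Address"] or entry["Node"]["Address"]
    return address, entry["Service"]["Port"]

host, port = resolve("orders-api")
print(f"calling http://{host}:{port}/ ...")
```

Because the lookup happens inside the calling service, it never depends on the ILB VIP, which side-steps the hairpin restriction quoted above.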

automatic failover if webserver is down (SRV / additional A-record / ?)

I am starting to develop a webservice that will be hosted in the cloud but needs higher availability than typical cloud SLAs provide.
Typical SLAs, e.g. Windows Azure's, promise an availability of 99.9%, i.e. up to 43 min of downtime per month. I am looking for an order-of-magnitude better availability (< 5 min of downtime per month). While I can configure several load-balanced database back-ends to resolve that part of the issue, I see a bottleneck at the web server: if the web server fails, the whole service is unavailable to the customer. What are the options for reducing that risk without introducing another possible single point of failure? I see the following solutions, and drawbacks to each:
SRV-record:
I duplicate the whole infrastructure (and take care that the databases are in sync) and add additional SRV records for the domain, so that a user trying to access www.example.com automatically gets forwarded to example.cloud1.com or, if that one is offline, to example.cloud2.com. Googling around, it seems that SRV records are not supported by any major browser; is that true?
second A-record:
Add an additional A record as an alternative. Drawbacks:
a) at my hosting provider I do not see any possibility to add a second A record, just one... is that normal?
b) if one of the two servers is down, I am not sure whether users get automatically redirected to the other one, or whether 50% of all users get a 404 or some other error
Any clues about best practice would be appreciated.
Cheers,
Sebastian
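Regarding drawback (b): whether clients fail over across multiple A records is client behaviour rather than something DNS guarantees; browsers generally try the next address when a connection fails, but nothing forces them to. When you control the client, the fallback can look roughly like this sketch (the hostname is a placeholder):

```python
# Try every address published for a name, in order, until one accepts a
# TCP connection. Illustrates multi-A-record fallback when you own the client.
import socket

def connect_any(host, port, timeout=3.0):
    last_err = None
    # getaddrinfo returns one entry per published A/AAAA record.
    for *_, sockaddr in socket.getaddrinfo(host, port, type=socket.SOCK_STREAM):
        try:
            return socket.create_connection(sockaddr[:2], timeout=timeout)
        except OSError as err:
            last_err = err  # this server is unreachable, try the next record
    raise last_err

sock = connect_any("www.example.com", 443)
print("connected to", sock.getpeername())
```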
The availability in the SLA specified by the cloud provider refers to the instance's health as a server running in the context of the hypervisor or fabric controller. With that said, you need to make the effort to ensure the instance is not failing because of your app, the OS, or pretty much anything else running inside the instance. There are a few things which devops tend to miss that then hit back hard, for instance forgetting to configure OS updates and patches.
The fundamental axiom of availability is redundancy: the more redundant your application and infrastructure are, the more available your app is.
I recommend you look into Azure Traffic Manager and then rework your architecture. You need not worry about the SRV record or A record; just a CNAME pointing at the Traffic Manager would do the trick.
The idea of Traffic Manager is simple: you tell it to stand behind the domain name (the DNS resolution of the app), and it then decides where to send each request based on factors such as round-robin routing, disaster management, etc.
With the combination of Traffic Manager and a multi-region infrastructure setup, you will march towards the high-availability goal.
Links
Azure Traffic Manager Overview
Cloud Power: How to scale Azure Websites globally with Traffic Manager
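If you want to watch a failover from the outside once this is in place, one quick, non-authoritative check is simply to resolve your domain and see which address Traffic Manager is currently handing out; clients pick up a change once the record's TTL expires (the hostname below is a placeholder):

```python
# Resolve the domain that CNAMEs to the Traffic Manager profile and print
# whatever names and addresses the local resolver reports right now.
import socket

name, aliases, addresses = socket.gethostbyname_ex("www.example.com")
print("resolved name:", name)
print("aliases:      ", aliases)
print("addresses:    ", addresses)
```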
Maybe you should configure a Corosync cluster with DRBD?
DRBD will ensure that the data on both nodes is replicated (for example the website files and DB files).
Apache as the web server will be available under a virtual IP to which the domain points. If one server goes down, Corosync will move all services to the second server within a few seconds.
