I am trying to configure applications across two different Azure sites, each with its own local load-balancing capability. I can use Traffic Manager to distribute the traffic and use weighted routing to force everything to my primary site.
But I want this to happen automatically: I'd like to map a service to the internal load balancers at both sites and have the health of each site evaluated to decide where traffic is forwarded. That way I would not have to reconfigure Traffic Manager manually in case of a disaster.
Note: the services are hosted on IIS on IaaS VMs. ILB1 and ILB2 are the respective internal load balancers for Site1 and Site2.
Any help is appreciated!
Thanks
As far as I know, we can't add an internal load balancer as a Traffic Manager endpoint.
But I want this to occur automatically where I can map a service pointing to the internal load balancers at both sites and evaluate whether the sites are up and running or not to decide where to forward the traffic.
By default, we can set up multiple sites around the world with Traffic Manager; Traffic Manager will probe the health of all sites and forward network traffic to the right site.
We can use a Traffic Manager profile to manage network traffic. Traffic Manager profiles use traffic-routing methods to control the distribution of traffic to your cloud services or website endpoints.
For example, we create website 1 on site 1 (the primary site) and website 2 on site 2. If we use the weighted method, network traffic will go to site 1. When site 1 goes down, Traffic Manager will detect this and route network traffic to site 2.
Traffic Manager works as a DNS-level load balancer; it will route traffic to an available site by default.
The Traffic Manager probe settings can be modified via the Azure portal.
By the way, if you want to use Traffic Manager, you can add a public IP address as a Traffic Manager endpoint.
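To illustrate, here is a minimal sketch using the Az PowerShell module. The resource group, profile and public IP names are placeholders, and it assumes the public IP fronting each site already has a DNS name label so it can be used as an Azure endpoint:

```powershell
# Minimal sketch (Az PowerShell module). "myRG", "pip-site1", "pip-site2" are placeholders.
$rg = "myRG"

# Weighted profile: almost all traffic goes to Site1 while its probe stays healthy.
$tmProfile = New-AzTrafficManagerProfile -Name "myservice-tm" -ResourceGroupName $rg `
    -TrafficRoutingMethod Weighted -RelativeDnsName "myservice-tm" -Ttl 30 `
    -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/"

$pip1 = Get-AzPublicIpAddress -Name "pip-site1" -ResourceGroupName $rg
$pip2 = Get-AzPublicIpAddress -Name "pip-site2" -ResourceGroupName $rg

# Site1 gets the dominant weight; Site2 only receives traffic when Site1 is unhealthy.
New-AzTrafficManagerEndpoint -Name "site1" -ProfileName $tmProfile.Name -ResourceGroupName $rg `
    -Type AzureEndpoints -TargetResourceId $pip1.Id -EndpointStatus Enabled -Weight 1000
New-AzTrafficManagerEndpoint -Name "site2" -ProfileName $tmProfile.Name -ResourceGroupName $rg `
    -Type AzureEndpoints -TargetResourceId $pip2.Id -EndpointStatus Enabled -Weight 1
```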
Update:
As a workaround, we can deploy a S2S VPN between the two locations and use HAProxy as the load balancer, then add the two HAProxy VMs to a public load balancer.
We can use HAProxy to designate the primary website; for more information about HAProxy, please refer to this link.
Related
I currently have 3 traffic managers, 1 entry point for our domain, which does geolocation routing to 2 other traffic managers. One global, one for the US.
These are priority-based Traffic Manager profiles which point to application gateways. Having the priority profiles allows us to have a 'failover' if one site / application gateway goes down.
The reason we have an application gateway in the different countries is to allow path manipulation, so if the user is from the US they get a /us path instead of a /.
I have configured our CNAMEs like www. and blog. in the application gateways for both global and US, which works fine. I can point the CNAME records to the entry traffic manager no problem.
The problem I am having is pointing the root-domain A record at the traffic manager. Since traffic managers don't have IP addresses, I get an error, because in Azure the root domain can be pointed at a traffic manager, but only one that uses external endpoints with an IP address.
Has anyone else run into this issue and found a way to solve it?
Thanks
Adding a root/apex domain to Azure Traffic Manager should be possible, as it is integrated with Azure DNS. So, you should be able to create an alias A record pointing to the Traffic Manager profile from Azure DNS, as shown below.
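For example, a minimal sketch with the Az PowerShell module (the zone, resource group and profile names are placeholders) creates an alias A record at the apex that targets the Traffic Manager profile:

```powershell
# Minimal sketch (Az PowerShell module). "entry-tm", "myRG" and "example.com" are placeholders.
$tm = Get-AzTrafficManagerProfile -Name "entry-tm" -ResourceGroupName "myRG"

# Alias A record at the zone apex ("@") that targets the Traffic Manager profile resource.
New-AzDnsRecordSet -Name "@" -RecordType A -ZoneName "example.com" `
    -ResourceGroupName "myRG" -Ttl 3600 -TargetResourceId $tm.Id
```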
Question: How do I host an endpoint in Azure which allows me to redirect internet traffic at will between Azure and AWS services?
I am hosting two Kubernetes clusters - one in Azure and the other in AWS. I want to be able to:
1. Redirect the traffic at will to either AWS or Azure, whilst retaining the public DNS endpoint.
2. Fail over manually [and preferably automatically too] to the AWS cluster.
What is the best way to host the endpoint in Azure?
Requirements:
The traffic needs to be redirected immediately - no caching issues and stale loads!
Ability to configure failover - i.e. specify that Azure is hot and AWS is the failover service - the traffic should be automatically redirected as soon as Azure goes down.
I have looked at Traffic Manager, Load Balancers and Application Gateway. Not sure which one (if any) of these is best.
Traffic Manager won't fully work for you: since it's a DNS service, caching will happen (admittedly it's the best solution if you set the DNS TTL to 5 seconds or so). Application Gateway allows you to specify an IP address as a backend endpoint, while load balancers only work when attached to VMs inside Azure. But Application Gateway doesn't let you fail over at will; you would need to block the health probe to force a failover.
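If you do go the Traffic Manager route with a short TTL, a rough sketch like this (Az PowerShell module; the FQDNs, health-probe path and names are placeholders) uses priority routing with Azure as the hot site and shows how to redirect traffic at will by disabling the active endpoint:

```powershell
# Minimal sketch (Az PowerShell module). FQDNs, "/healthz" and names are placeholders.
$rg = "myRG"

# Priority routing: Azure is hot, AWS is the failover; a 5-second TTL limits DNS caching.
$tmProfile = New-AzTrafficManagerProfile -Name "clusters-tm" -ResourceGroupName $rg `
    -TrafficRoutingMethod Priority -RelativeDnsName "clusters-tm" -Ttl 5 `
    -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/healthz"

New-AzTrafficManagerEndpoint -Name "azure-cluster" -ProfileName $tmProfile.Name -ResourceGroupName $rg `
    -Type ExternalEndpoints -Target "ingress.azure.example.com" -EndpointStatus Enabled -Priority 1
New-AzTrafficManagerEndpoint -Name "aws-cluster" -ProfileName $tmProfile.Name -ResourceGroupName $rg `
    -Type ExternalEndpoints -Target "ingress.aws.example.com" -EndpointStatus Enabled -Priority 2

# Manual redirect "at will": disable the Azure endpoint so DNS answers switch to AWS.
Disable-AzTrafficManagerEndpoint -Name "azure-cluster" -ProfileName $tmProfile.Name `
    -ResourceGroupName $rg -Type ExternalEndpoints -Force
```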
Azure Front Door might be the solution for you (like the other answer mentions)
You can have a look at the Azure Front Door Service for your use case.
Look into this https://learn.microsoft.com/en-gb/azure/frontdoor/front-door-overview
I have a use case where my cluster has 3 VMs working as head nodes in HPC Pack and a bunch of other VMs working as compute nodes.
So basically, after creating this cluster, I must install a special HPC client; from this client, I type the DNS name of each of the VMs to access the HPC management interface.
For example: https://head-node-1.azure.com
Of course, if I access this DNS name from Chrome, I only see the IIS page.
I want to create a load balancer with its own DNS name. Let's say https://load-balancer.azure.com
So from my client, every time I access the load balancer's DNS name, I can see the management interface, not the IIS page.
How can I do that?
Not sure I'm understanding you correctly. Basically, Azure Application Gateway supports URL path-based routing rules.
Actually, Application Gateway supports web-based traffic load balancing, while Azure Load Balancer supports stream-based traffic. If you want to listen on HTTP or HTTPS, you can use Application Gateway. Per your description, you cannot access the HPC management interface from a web browser, so you could use layer-4 load balancing based on TCP/UDP instead.
So you could create a public-facing load balancer and add the head-node VMs to its backend pool. Create a health probe and load-balancing rules for the ports your HPC management interface listens on on each of the VMs.
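A minimal sketch of that setup with the Az PowerShell module, assuming the management interface listens on 443 (substitute the real HPC Pack ports) and using placeholder names:

```powershell
# Minimal sketch (Az PowerShell module). Assumes the management interface listens on 443;
# substitute the real HPC Pack ports. Names, region and DNS label are placeholders.
$rg  = "hpc-rg"
$loc = "eastus"

$pip = New-AzPublicIpAddress -Name "hpc-lb-pip" -ResourceGroupName $rg -Location $loc `
    -Sku Standard -AllocationMethod Static -DomainNameLabel "load-balancer"

$frontend = New-AzLoadBalancerFrontendIpConfig -Name "fe" -PublicIpAddress $pip
$pool     = New-AzLoadBalancerBackendAddressPoolConfig -Name "headnodes"

# Probe and rule for the management port (plain TCP pass-through, no TLS termination).
$probe = New-AzLoadBalancerProbeConfig -Name "probe443" -Protocol Tcp -Port 443 `
    -IntervalInSeconds 15 -ProbeCount 2
$rule  = New-AzLoadBalancerRuleConfig -Name "mgmt443" -Protocol Tcp -FrontendPort 443 -BackendPort 443 `
    -FrontendIpConfiguration $frontend -BackendAddressPool $pool -Probe $probe

New-AzLoadBalancer -Name "hpc-lb" -ResourceGroupName $rg -Location $loc -Sku Standard `
    -FrontendIpConfiguration $frontend -BackendAddressPool $pool -Probe $probe -LoadBalancingRule $rule
# The head-node VMs' NICs still need to be added to the "headnodes" backend pool.
```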
Hope this helps, let me know if you have any concerns.
I'm trying to build a simple two-tier WordPress environment on CentOS 7.2 in Azure.
I've defined a virtual network, have connected it to my home-lab via IPsec VPN, and I've defined several subnets in Azure (for Web tier, SQL tier, and utility tier role segregation using Network Security Groups).
I have two web-tier VMs, both members of the same Availability Set, and both on the web-tier subnet. They have outbound internet access, I can SSH to them from my home lab, and they seem fine operationally to me: httpd is listening on 80/tcp, and I can hit the web pages from my home-lab network by visiting each web server directly on its 192.168.x address.
I should mention my web servers DO NOT have public IPs assigned, but I can't see this being an issue; they're intended to be behind the load balancer.
So, I've created a Load Balancer, and:
assigned a public IP to the LB
added a backend pool (selected my availability set, and chose my two web servers)
added a probe (http probing the two web servers)
added a load balancer rule
Notice I did NOT add an inbound NAT rule. I can't figure out what that's for, or if I need it.
On my web tier, I tcpdump port 80 and see the probes. In httpd logs, I see 200 success messages for the probes. I go to a web browser, hit the external VIP I assigned to the LB, and nothing. It just times out. I cannot connect to the LB VIP.
What am I missing? What are the NAT rules about?
Any help would be appreciated. All I can find online are examples doing this in powershell etc.. and I'm using the Azure web interface.
Thanks!
Found the issue: the NSG needed to allow not just the AzureLoadBalancer tag, but also "Internet", to reach port 80/tcp. Should have thought of that sooner.
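For reference, a minimal sketch of that NSG rule with the Az PowerShell module; the NSG and resource group names are placeholders:

```powershell
# Minimal sketch (Az PowerShell module). "web-nsg" and "web-rg" are placeholder names.
$nsg = Get-AzNetworkSecurityGroup -Name "web-nsg" -ResourceGroupName "web-rg"

# Allow inbound HTTP from the Internet (the AzureLoadBalancer tag only covers the probes).
$nsg | Add-AzNetworkSecurityRuleConfig -Name "allow-http-internet" -Priority 200 `
    -Direction Inbound -Access Allow -Protocol Tcp `
    -SourceAddressPrefix "Internet" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "80" | Set-AzNetworkSecurityGroup
```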
I want to provide a failover-proof URL for my service endpoint to users using Traffic Manager. I have a service instance running at http://vm1.cloudapp.net/myservice:8888/index.html. If this instance goes down, the service auto-starts on vm2 at http://vm2.cloudapp.net/myservice:8888/index.html, and vice versa.
I want Azure to hide the underlying service URLs from users and expose the service at http://myservice.trafficmanager.net
Is this possible? If so, how? From reading the Traffic Manager documentation, it looks like you can fail over only at the DNS level and not at the URL/endpoint level.
There are several parts to this.
Firstly, you are right that Traffic Manager works at the DNS level. It doesn't see your HTTP traffic and hence doesn't see the full URL. Since your two service instances have different DNS names, there's no issue here: you configure Traffic Manager with both names as separate 'endpoints', and Traffic Manager will direct traffic to those endpoints by providing one or the other in each DNS response.
Secondly, you want to hide the URL paths. Since Traffic Manager works at the DNS level, it doesn't see your HTTP traffic and hence doesn't see the URL, only the domain name. Therefore this is something you have to handle at the application level (just as you would for a single-instance service that doesn't use Traffic Manager).
The only thing to be careful of is to make sure you configure the correct URL port and path in the Traffic Manager endpoint monitoring configuration. Just make sure that Traffic Manager shows your endpoints as 'Online', and you're good.
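For example, a minimal sketch of adjusting the monitoring settings with the Az PowerShell module; the profile name, port and path below are placeholders to replace with whatever your service instances actually listen on:

```powershell
# Minimal sketch (Az PowerShell module). "myservice", "myRG", port 8888 and "/index.html"
# are placeholders; set them to the port and path your endpoints really serve.
$tmProfile = Get-AzTrafficManagerProfile -Name "myservice" -ResourceGroupName "myRG"

$tmProfile.MonitorProtocol = "HTTP"
$tmProfile.MonitorPort     = 8888
$tmProfile.MonitorPath     = "/index.html"

# Push the updated monitoring settings; endpoints should then show as 'Online'.
Set-AzTrafficManagerProfile -TrafficManagerProfile $tmProfile
```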
Jonathan