Azure Traffic Manager not routing to correct data center

We have deployed our Azure web application to two separate data centers (one in West Europe and the other in Southeast Asia) purely for performance reasons. We have configured a Traffic Manager profile to route requests between these two DCs based on performance. However, when users in Shanghai try to access the site, they are routed to the DC in West Europe, when ideally they should have been routed to the Southeast Asia DC. Because of this, users in Shanghai are not seeing the intended performance benefit. It works well from India and Europe, though, i.e. they are routed to the closest DC. What could be the problem? Also, is there a way to test Traffic Manager's performance-based routing to see if it is working as expected?
UPDATE:
I asked the users in Shanghai to use Azure Speed Test to find the closest DC. They see "It looks like your nearest Data Center is West Europe. There appears to be a CDN Node nearer your location". When we use the same site from India, it shows "It looks like your nearest Data Center is Southeast Asia". My questions are:
Based on the above lookups, is TM routing correctly, even though India is geographically closer to West Europe than Shanghai is?
What is the additional message "There appears to be a CDN Node nearer your location" that is shown after the data center name when looked up from Shanghai, but not from India? Could this "CDN Node" be causing the Shanghai users to detect West Europe as the closest DC?

I think you are confusing two different concepts:
Physical proximity vs. network proximity (ie. latency)
CDN vs. WATM
A user's physical geographic proximity to a datacenter is only marginally related to which datacenter will be fastest. Around the world there are different peering agreements, interconnects, etc. that can cause a geographically closer datacenter to have worse latency/bandwidth to a user than a geographically farther one. This is why it is always important to test the latency from your users' location rather than just assuming by looking at a map. The azurespeedtest site you used is a great way to check the real-world performance from a user to an Azure datacenter, and the fact that it shows the same results as WATM means that WATM is performing correctly and your users are getting the fastest speeds possible.
CDN is a cache layer for static content and has lots of nodes throughout the world (see http://msdn.microsoft.com/library/azure/gg680302.aspx), and these nodes are in no way related to Azure datacenters. CDN also has nothing to do with WATM or which datacenter WATM would point a specific user to. If you have a lot of static content then you may want to consider adding a CDN endpoint in front of your site in order to cache the content closer to your users. See http://msdn.microsoft.com/en-us/library/azure/ee795176.aspx for more info.
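To make the distinction concrete: "nearest" in performance routing means lowest measured latency, not shortest distance on a map. Here is a minimal sketch of that selection; the endpoint names and timings are hypothetical, and a TCP handshake is only a rough proxy for the latency map Azure actually maintains:

```python
import socket
import time

def measure_connect_ms(host, port=443, timeout=3.0):
    """Time a TCP handshake to an endpoint; a rough proxy for network latency."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.monotonic() - start) * 1000.0

def nearest(latencies_ms):
    """Pick the endpoint with the lowest measured latency, not the closest on a map."""
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical RTTs as seen from Shanghai: the geographically farther
# datacenter can still win once peering and transit routes are factored in.
sample = {"westeurope": 180.0, "southeastasia": 240.0}
print(nearest(sample))  # -> westeurope
```

Running `measure_connect_ms` against your real regional URLs from the users' location is essentially what azurespeedtest does in the browser.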

Related

how to connect to geographically close datacenter

I am reading a book about distributed systems. One of the data replication options mentioned is a multi-leader approach, with each leader placed in a different datacenter. The main point of having different datacenters is to be geographically close to the user.
The author then discusses all the write conflicts that emerge from having multiple write leaders, but he doesn't say much about how to direct users to connect to the geographically close data center.
For example, a user in Austria makes an HTTP request to https://stackoverflow.com. Stack Overflow has datacenters in Germany and North America. The DNS record points to the datacenter in the US.
Is the initial request always going to point to the datacenter in the US? I know that once a user is identified, I can instruct all AJAX and img requests to point to Germany (by changing the HTML response I send back), but initial requests, such as a page reload, will always point to the US.
This kind of defeats the purpose of being geographically close to users if they always have to connect to the distant server first, and only after that are the inline resources fetched from a nearby server. Am I missing some essential principles here?
It is very much possible to connect to the geographically nearest datacenter. There are lots of companies providing this as a service, e.g. Akamai, AWS, Google Cloud, Cloudflare.
This is generally done at the DNS level. When someone makes a request to your domain, the first request goes to the DNS server to resolve the domain name to an IP address. This is where the appropriate nearest server location gets resolved.
This is generally used for load balancing as well, where it is called DNS load balancing.
This is done through Geo DNS, most cloud service providers have this.
This article has a good explanation on how Geo DNS works.
For example, in AWS one could set up a Geolocation routing policy. This lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from.
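The idea above can be illustrated with a toy resolver. The country codes and IP addresses below are made up, and real geo-DNS services keep far richer maps with graceful fallbacks:

```python
# Toy geo-DNS: the authoritative server answers the same name with a
# different A record depending on where the querying resolver appears to be.
GEO_MAP = {
    "AT": "203.0.113.10",   # hypothetical German front end
    "DE": "203.0.113.10",
    "US": "198.51.100.20",  # hypothetical US front end
}
DEFAULT_IP = "198.51.100.20"  # fallback when the resolver's location is unknown

def resolve(name, resolver_country):
    """Return the A record for `name` based on the resolver's apparent country."""
    return GEO_MAP.get(resolver_country, DEFAULT_IP)

print(resolve("stackoverflow.com", "AT"))  # -> 203.0.113.10
```

Note that the decision is made per DNS query, so the user in Austria gets the German IP on the very first request, which answers the "initial page load" concern in the question.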

Azure WebApp, scaling beyond 50 instances

I know that with the Premium tier, I can have up to 50 instances for my web app in Azure. If I needed to go beyond this, say 75 instances, what would be the most appropriate way to do it?
Maybe two different app service plans, different web app endpoints load balanced by Traffic Manager?
Thanks!
A Hosting Plan is simply a geographical collection of web servers. Within that hosting plan you can have 'x' number of servers (depending on the SKU).
The machines in a Hosting Plan will be split across fault and update domains, so that a server rack dying, or an upgrade rollout, won't take out all of the servers in the hosting plan.
However, what this doesn't protect you against is geographically scoped issues. If you have a hosting plan in West Europe and the West Europe region suffers an outage, you could lose your entire deployment.
This is where their being a geographical collection of servers becomes an important characteristic. If you create a number of hosting plans in a number of regions, not only will you have local redundancy against fault and update outages, but you will also gain redundancy against geographical outages.
Obviously, if you need 500 servers, there is nothing stopping you from creating 10 Premium SKU hosting plans, deploying them all to the West Europe region, and creating some sort of round-robin DNS load balancing solution.
But the better solution is to share them across regions, creating a hierarchy of Traffic Manager profiles to share the load among them. With the right automation you can have some regions coming on and off line as your load increases/decreases.
Personally, unless I have specifically required Premium features (BizTalk etc.), my preference has always been to simply deploy more service plans. It is far more cost effective.
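The nested-profile hierarchy described above can be sketched with the Azure CLI. This is only an outline, not a full deployment: the resource group, profile and DNS names are placeholders, and you would repeat the child-profile steps once per region:

```shell
# Parent profile: performance routing across regions (names are hypothetical).
az network traffic-manager profile create \
  --resource-group myapp-rg --name myapp-global \
  --routing-method Performance --unique-dns-name myapp-global

# Child profile for one region: spreads load across that region's app service plans.
az network traffic-manager profile create \
  --resource-group myapp-rg --name myapp-weu \
  --routing-method Weighted --unique-dns-name myapp-weu

# Attach the child profile to the parent as a nested endpoint.
# Performance-routed parents need an explicit location for nested endpoints.
az network traffic-manager endpoint create \
  --resource-group myapp-rg --profile-name myapp-global \
  --name weu --type nestedEndpoints \
  --target-resource-id "$(az network traffic-manager profile show \
      --resource-group myapp-rg --name myapp-weu --query id -o tsv)" \
  --endpoint-location "West Europe" --min-child-endpoints 1

# Add each web app in the region to the child profile.
az network traffic-manager endpoint create \
  --resource-group myapp-rg --profile-name myapp-weu \
  --name weu-app-1 --type azureEndpoints \
  --target-resource-id "<resource id of the regional web app>"
```

Users resolve the parent profile's single DNS name; the parent picks the fastest region and the child profile then balances across the plans inside it.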

Azure Traffic manager - Route by User IP Address

I have a web application in multiple regions in the Azure Cloud and I'm using Traffic Manager in Performance mode to redirect the user to the closest region.
What's concerning me is the following:
With the site https://www.whatsmydns.net I checked my web application to see which datacenter is selected.
The funny thing is that people from California get redirected to the server in West Europe, although there is a server in US Central too.
So from Traffic Manager's point of view, the ping to the Europe server is faster than to US Central.
But I believe that the difference between these two cannot be high...
Now I'm afraid that a user could jump between US Central and Europe all the time, because he is in a zone where the latencies to the available servers are nearly identical.
I also store files in an Azure Storage account in each region. If the user jumps, I would have to transfer these files between the regions all the time...
So I was wondering if there is a possibility to redirect the user to a specific region by their GeoIP rather than by latency?
One of the benefits of Traffic Manager is, in my eyes, that I can use one domain for all regions...
The only solution to my problem I can think of is my own cloud service which replaces Traffic Manager and redirects users to the different regions by their IP, like us-center.DOMAIN.com, we-eu.DOMAIN.com etc...
Are there any other solutions?
Thanks for your help!
Br,
metabolic
If you believe Traffic Manager is routing queries incorrectly, that should be raised with Azure Support.
Traffic Manager 'Performance' mode routing is based on an internal 'IP address to Azure data center' latency map. The source IP of the DNS query (which is typically the IP of your DNS server, not of the end user) is looked up in the map to determine which Azure location will offer the best performance. There is an implicit assumption that the IP address of the DNS server is a good proxy for the location of the end user.
The 'Performance' mode in Azure Traffic Manager is deterministic. Identical queries from the same address will be routed consistently. The only exception is that routing may change during occasional map updates, which affect only a small percentage of the IP address space.
A more common cause of routing changes is customers moving from place to place, for example during travel, or simply by picking up a Wi-Fi network that uses a DNS service in a different location, with a different IP address.
Geo-IP-based routing is not currently supported by Traffic Manager. However, note that it would work in the same way as 'Performance' routing, just with a different map; users could still be routed to different locations as a result of map updates or changing DNS servers.
As you describe, if your application requires a strong, inviolable association between a user and a region, one option is to redirect users at the application level (e.g. via HTTP 302).
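A minimal sketch of that application-level redirect, using only the standard library. The region map, hostnames, and the way a user's home region is looked up are all hypothetical; a real service would read them from the user's profile store:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical per-region hostnames, echoing the naming in the question.
REGION_HOSTS = {"us": "us-center.example.com", "eu": "we-eu.example.com"}

def home_region_for(user_id):
    """Placeholder lookup: in practice, query the user's profile store."""
    return "eu" if user_id.startswith("eu-") else "us"

class RegionRedirector(BaseHTTPRequestHandler):
    """Answer every request with a 302 to the user's pinned home region."""
    def do_GET(self):
        user = self.headers.get("X-User-Id", "anonymous")
        host = REGION_HOSTS[home_region_for(user)]
        self.send_response(302)
        self.send_header("Location", f"https://{host}{self.path}")
        self.end_headers()

# To run: HTTPServer(("", 8080), RegionRedirector).serve_forever()
```

This keeps the single Traffic Manager domain for the first hop; once the user is identified, every response pins them to one region regardless of which datacenter DNS happened to resolve to, so the storage account never has to chase a jumping user.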

Azure VM IP geolocation?

I'm creating an application which uses an API that has black-listed countries. I'm currently developing this on an Azure VM. I'm in the UK and I'm allowed access to the API here, however my VM's IP shows up as being in the States which is disallowed. I believe a European IP would work.
Is there any way around this? My VM/Cloud Service location is West Europe. I've tried configuring a reserved public IP for West Europe also but this hasn't worked. I guess my understanding of what that actually does is flawed.
Thanks in advance.
Your understanding seems all right. Depending on the region you choose, your Cloud Service will be physically located in a different Azure datacenter; for West Europe it is in the Netherlands.
The thing that is flawed is IP geolocation. These kinds of services are based on databases that can be inaccurate. Microsoft, for example, can register some large IP address space as US, but assign some of those addresses to a datacenter in the Netherlands, and there is no way for the geolocation service to know about it.
For my services that are also in West Europe, some are reported to be in the US and some in NL, and different geolocation services give different results.

Why does Azure Traffic Manager resolve Australia to US Servers?

I have a website hosted in Azure, which is globally load balanced across 3 different Azure data centres.
We can see from the following DNS check that requests coming from the US resolve to my West US data centre, and requests in and around Europe go to my European DC. Southeast Asia goes to East Asia fine, but the whole of Australia gets routed to the US.
https://www.whatsmydns.net/#A/www.whatsonglobal.com
Being an Australian resident, and I'm sure for our Australian customers, this isn't great, especially since it's currently adding an extra 3 seconds of load time to the homepage.
How do I fix this without having to choose a different load balancer? I like the simplicity of the Azure traffic manager, but only if it's up to scratch.
Patrick, first off, the whatsmydns.net URL shows that not all of Australia is going to the US: two locations are going to Europe and one is going to the US.
Azure constantly probes the LDNS servers around the world from all datacenters and regularly updates the performance tables in order to route users to the 'closest/fastest' datacenter. The fastest datacenter is usually based on the routing and peering relationships between ISPs, so it may not always be the geographically closest datacenter.
Most likely your users are getting faster performance from the website selected by the WATM endpoint than they would be from any other website, but you can validate this by trying to browse directly to the website URLs. If you find that WATM is not sending users to the fastest datacenter then you can open a support incident to have the Azure team investigate the routing and latency table.
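One concrete way to run that check: time a request to each regional deployment directly and compare against where WATM sends you. The region names and timings below are made up; in practice you would capture the numbers with `curl -so /dev/null -w '%{time_connect}\n' <regional URL>` from the affected location:

```shell
# Hypothetical connect times (seconds) measured from one client location.
# Sorting by the second column picks the fastest deployment, which should
# match the endpoint Traffic Manager resolves for that client.
printf '%s\n' \
  'westus 0.187' \
  'westeurope 0.052' \
  'eastasia 0.241' |
sort -k2 -n | head -n 1 | cut -d' ' -f1   # -> westeurope
```

If the fastest deployment by this measure is consistently not the one WATM picks, that is the evidence to attach to a support incident.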
Patrick - the behaviour is most likely due to there being no local Azure footprint (yet) in Australia. There are CDN endpoints located in Australia, and that's about it right now.
If you've got a site hosted in Azure then you'll be in Singapore, West US or one of the other existing regions anyway, so users in Australia face cross-region latency whichever region Traffic Manager picks.
