GeoIP Routing with Windows Azure

I'm working on a somewhat large project that will eventually be deployed to Azure. The idea is that we will have multiple compute nodes all over the world, as our customer base is potentially that large. The question I have is this:
If I have nodes in the US, Europe, Asia, etc. for DR and load-balancing reasons, how can I combine the idea of geo-based DNS results with Azure, given that our application will simply be a CNAME for our URL?
I'm not sure I quite understand the deployment strategy for one application running out of multiple regions with Azure. Does anyone have any links or references to better understand the model?
Mod note: Not sure if this should be on Server Fault, but I thought Stack Overflow was a better fit.
Thanks,
Brent

Look at Windows Azure Traffic Manager; it lets you group deployments across regions as one logical service and automatically routes each request to the nearest region.
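To tie that back to the CNAME question: you point your own hostname at the Traffic Manager name, and Traffic Manager's DNS answer resolves to whichever regional deployment is closest and healthy. A quick way to see the chain from any vantage point (the hostnames below are hypothetical):

```python
import socket

# Resolving the public hostname follows the CNAME chain, roughly:
#   www.example.com -> myapp.trafficmanager.net -> myapp-westeurope.cloudapp.net
# The aliases and the final IP depend on where the query is made from and on
# which regional deployments Traffic Manager currently considers healthy.
name, aliases, addresses = socket.gethostbyname_ex("www.example.com")
print("canonical name:", name)
print("aliases:", aliases)
print("addresses:", addresses)
```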

Multi-regional Azure Container Service DC/OS clusters

I'm experimenting a little with ACS using the DC/OS orchestrator, and while spinning up a cluster within a single region seems simple enough, I'm not quite sure what the best practice would be for doing deployments across multiple regions.
Azure Container Service itself does not seem to support deploying a single cluster across more than one region right now. With that assumption, I guess my only other option is to create multiple, identical clusters in all the regions I wish to be available in, and then use Azure Traffic Manager to route incoming traffic to the nearest available cluster.
While this solution works, it also causes a few issues I'm not 100% sure how to work around.
Our deployment pipelines must make sure to deploy to all regions when deploying a new version of a service. If we have an East US region and a North Europe region, then during deployments from our CI tool I have to connect to the Marathon API in both regions to trigger the new deployments. If the deployment fails in one region and succeeds in the other, I suddenly have a disparity between the two regions.
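For concreteness, a rough sketch of that fan-out step, assuming the standard Marathon REST API (PUT /v2/apps/{id}); the endpoint URLs and the app definition below are hypothetical placeholders:

```python
# Push the same app definition to the Marathon API in every region and fail
# the pipeline if any region rejects it, so the regions cannot silently drift.
import sys
import requests

MARATHON_ENDPOINTS = {
    "eastus": "https://dcos-eastus.example.com/service/marathon",
    "northeurope": "https://dcos-northeurope.example.com/service/marathon",
}

app_definition = {
    "id": "/myservice",
    "container": {
        "type": "DOCKER",
        "docker": {"image": "myregistry/myservice:1.2.3", "network": "BRIDGE"},
    },
    "instances": 2,
    "cpus": 0.5,
    "mem": 512,
}

failures = []
for region, base_url in MARATHON_ENDPOINTS.items():
    # PUT /v2/apps/{id} updates the app definition and triggers a rolling deployment.
    resp = requests.put(
        base_url + "/v2/apps" + app_definition["id"],
        json=app_definition,
        timeout=30,
    )
    if not resp.ok:
        failures.append((region, resp.status_code, resp.text))

if failures:
    print("Deployment failed in:", failures)
    sys.exit(1)
```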
If I have a service using local persistent volumes deployed, let's say PostgreSQL or ElasticSearch, it needs to have instances in both regions, since service discovery will only find services local to the region. That brings up the problem of replication between regions to keep all state in all regions; this seems to require a fair amount of manual configuration to get working.
Has anyone ever used a setup somewhat like this with Azure Container Service (or Amazon's container service, for that matter, as I assume the same challenges can be found there) and has some pointers on how to approach this?
You have multiple options for spinning up across regions. I would use a custom installation together with Terraform for each of them. This is a great starting point: https://github.com/bernadinm/terraform-dcos
Distributing agents across different regions should be no problem, ensuring that your services will keep running despite failures.
Distributing masters (giving you control over the services during failures) is a little more difficult, as it involves distributing a ZooKeeper quorum across high-latency links, so you should be careful in choosing the "distance" between regions.
Have a look at the documentation for more details.
You are correct; ACS does not currently support multi-region deployments.
Your first issue is specific to Marathon in DC/OS, I'll ping some of the engineering folks over there to see if they have any input on best practice.
Your second point is something we (I'm the ACS PM) are looking at. There are some solutions you can use in certain scenarios (e.g. ArangoDB is in the DC/OS universe and will provide replication). The DC/OS team may have something to say here too. In ACS we are evaluating the best approaches to providing solutions for this use case but I'm afraid I can't give any indication of timeline.
An alternative solution is to have your database in a SaaS offering. This takes away all the complexity of managing redundancy and replication.

Setting up High Availability on Azure

We are in the process of moving our current IAAS solution over to Azure, where we host ASP.NET LOB web applications in IIS with a SQL Server backend.
I’m looking into availability sets. I stumbled upon this article at: http://michaelwasham.com/windows-azure-powershell-reference-guide/understanding_configuring_availability_sets_powershell/
This seems like how we want to set up our deployment on Azure, where the "Web Servers" are our virtual machines. My question relates to how we set up those virtual machines. Currently we have about 200 separately hosted customers on our IAAS solution, which means 200 separate web applications in IIS. With a highly available deployment, should the virtual machines be exact replicas of each other, i.e. 200 customers on box 1 and the same again on box 2? Or should we spread them over multiple boxes, i.e. customers 0-50 on box 1, customers 50-100 on box 2, and so on?
I can't see how the second option (spreading) would work in a highly available set, because if one box goes down then all the customers on it go down with it?
I'm a little confused; hoping someone has advice on this.
Thanks
It would be best to duplicate everything. The Azure load balancer (a layer 4 balancer) spreads load across listening endpoints evenly and randomly, so you cannot know which server will answer a request; hence you must have the same configuration on both boxes. Here is some info on the Azure load balancer which you might find interesting: link
Also put them into an availability set, so if one of the VMs dies for some reason, or during updates (which can happen from time to time with VMs), you can be sure that at least one of your VMs will always be online. You may need more than two VMs; it really depends on the amount of traffic each of your clients generates and the load that creates. But do have at least two VMs.
Just a side note if you haven't used Azure before: it may take you a bit of time to get the cost per VM right, but remember to use the schedule to scale up and down to reduce costs and anticipate load on your VMs. Also, four smaller VMs are better than two larger VMs from a failure point of view, and ultimately cost about the same over a month. If one large VM dies, you've lost 50% of your capacity to serve your clients, whereas if you lose one smaller VM you've only lost 25%.
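To put numbers on that last point, a trivial sketch (the pool sizes are just examples):

```python
# Fraction of serving capacity lost when a single VM in a pool of equally
# sized VMs fails. Example pool sizes only.
for vm_count in (2, 4, 8):
    loss = 1 / vm_count
    print(f"{vm_count} VMs: losing one removes {loss:.0%} of capacity")
```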

How to deal with the recent Azure outage (Azure Websites)?

We have TONS of websites hosted on Azure. Our VMs appear to be running now, but many of our Azure Websites are not. In an effort to bring our sites back up sooner rather than later, we have tried scaling UP, OUT, and changing our hosting plan, to no avail. Is there a way to force an Azure Website VM to move to another (working) datacenter? We don't want to destroy the site and bring it back up, as we would be forced to update DNS, which would cause an even longer delay in service to our customers.
Any help is greatly appreciated.
Sorry to everyone else experiencing a long night right along with me.
Your best bet is to run two instances of the site in two regions and use something like Traffic Manager (or Amazon Route 53 if you want something external to Azure) to perform failover routing for you.
Depending on the type of sites, you could run a static holding site in a non-Azure environment and fail over to that. How you choose to solve this will depend on what your budget is (or the opportunity cost in the event your sites are offline).
Note that a 99.9% yearly SLA equates to almost 9 hours of downtime in a year.
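For reference, that figure is just the arithmetic; a quick check, extended to a couple of other common SLA levels:

```python
# Maximum downtime per year permitted by an availability SLA.
HOURS_PER_YEAR = 365 * 24  # 8760

for sla in (0.999, 0.9995, 0.9999):
    allowed_hours = HOURS_PER_YEAR * (1 - sla)
    print(f"{sla:.2%} SLA -> {allowed_hours:.2f} hours of downtime per year")
```

A 99.9% yearly SLA works out to 8.76 hours, hence "almost 9 hours".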
If you want to understand how you could solve this intra-Azure here's a good guide: http://blog.kloud.com.au/2014/11/03/deploy-an-ultra-high-availablity-mvc-web-app-on-microsoft-azure-part-1/

Azure subscription info

I am a newbie to web development.
I would like to host my site on Azure.
There are so many subscription plans.
Which subscription is reasonably good, and can you give me the price details for it?
Thanks in advance
Windows Azure has a few types of hosting. For a website you might want to look at the following:
Web Sites - you can host your existing project right away without modification.
Cloud Services - I used this, but it requires changes such as adding caching.
Here is the calculator you can use based on your needs.
FYI: the rule of thumb is that you need at least two instances in production to minimize downtime.
If you are a newbie, I would strongly suggest using Azure Websites for now; you can always move to a custom solution using web roles, caching, etc. later if you feel it doesn't cater to all your needs.
Azure Websites pricing can be obtained from here:
https://azure.microsoft.com/en-us/pricing/details/web-sites/
As for which parameters to use when choosing the right package, you are the best judge of that, since you know what traffic you are expecting and how much memory etc. you need.

Minimize downtime in Azure

We are experiencing a very serious unscheduled downtime of our Azure application today for what is now coming up to 9 hours. We reported to Azure support and the ops team is actively trying to fix the problem and I do not doubt that. We managed to get our application running on another "test" hosted service that we have and redirected our CNAME to point at the instance so our customers are happy, but the "main" hosted service is still unavailable.
My own "finger in the air" instinct is that the issue is network related within our data centre (West Europe), and indeed, later in the day the service dashboard went red for that region with a message to that effect. (Our application is showing as "Healthy" in the portal, but is unreachable via our cloudapp.net URL. Additionally, threads within our application are logging SQL connection exceptions into our storage account as it cannot contact the DB.)
What is very strange, though, is that the "test" instance I referred to above is also in the same data centre and has no issues contacting the DB, and its external endpoint is fully available.
I would like to ask the community if there is anything that I could have done better to avoid this downtime? I obeyed the guidance with respect to having at least 2 instances per role, yet I still got burned. Should I move to a more reliable data centre? Should I deploy my application to multiple data centres? How would I manage the fact that my SQL Azure DB is in the same data centre?
Any constructive guidance would be appreciated - being a techie, I've never had a more frustrating day being able to do nothing to help fix the issue.
There was an outage in the European data center today with respect to SQL Azure. Some of our clients got hit and had to move to another data center.
If you are running mission-critical applications that cannot be down, I would deploy the application into multiple regions. DNS resolution is obviously a weak link right now in Azure, but it can be worked around (if you only run a website it can be done very simply using Response.Redirect or similar).
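A minimal sketch of that redirect idea (the answer refers to ASP.NET's Response.Redirect; Flask is used here purely for illustration, and the health check and secondary-region URL are hypothetical placeholders):

```python
from flask import Flask, redirect, request

app = Flask(__name__)

SECONDARY_REGION_URL = "https://myapp-northeurope.cloudapp.net"  # placeholder

def primary_region_healthy():
    # Placeholder: in practice, test the database connection or another
    # critical dependency and cache the result for a few seconds.
    return True

@app.before_request
def failover_redirect():
    # Returning a response here short-circuits normal request handling and
    # sends the user to the deployment in the other region.
    if not primary_region_healthy():
        return redirect(SECONDARY_REGION_URL + request.full_path, code=302)
```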
Now, there is a data synchronization service from Microsoft that will sync up multiple SQL Azure databases. Check here. This way, you can have mirror sites up in different regions and keep them in sync from a SQL Azure perspective.
Also, it would be a good idea to employ a third-party monitoring service that detects problems with your deployed instances externally. AzureWatch can notify you, or even deploy new nodes if you choose to, when some of the instances turn "Unresponsive".
Hope this helps
I can offer some guidance based on our experience:
Host your application in multiple data centers, complete with SQL Azure databases. You can connect each application to its data-center-specific SQL Azure server. You can also cache any external assets (images/JS/CSS) on the data-center-specific Windows Azure machine, or leverage Azure Blob Storage. Note: extra costs will be incurred.
Set up one-way SQL replication between your primary SQL Azure DB and the instance in the other data center. If you want to do bi-directional replication, take a look at the MSDN site for guidance.
Leverage Azure Traffic Manager to route traffic to the data center closest to the user. It has geo-detection capabilities, which will also improve the latency of your application. So you can map http://myapp.com to the Traffic Manager URL for your deployments, and a user in Europe should automatically be directed to the European data center, and vice versa for the USA. Note: at the time of writing this post, there is no way to automatically detect and fail over to a data center. Manual steps will be involved once a failure is detected, and failover is a complete set (i.e. you will fail over both the Windows Azure AND SQL Azure instances). If you want micro-level failover, then I suggest putting all your config in the service config file and encrypting the values, so you can edit the connection string to connect instance X to DB Y.
You are all set now. I would create or install a local application to detect the availability of the site. A better solution would be to check for the availability of application-specific components by writing a diagnostic page or web service and then polling it from a local computer.
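A minimal sketch of such an external poller, assuming a hypothetical diagnostic-page URL (the alerting hook is left as a placeholder):

```python
import time
import requests

HEALTH_URL = "https://myapp.cloudapp.net/diagnostics"  # hypothetical diagnostic page

def check_once(timeout=10):
    # Treat anything other than a 200 response (or a network error/timeout)
    # as unhealthy.
    try:
        return requests.get(HEALTH_URL, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    while True:
        if not check_once():
            # Placeholder: hook up email/SMS/pager notification here.
            print("ALERT: diagnostic page unreachable or unhealthy")
        time.sleep(60)
```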
HTH
As you're deploying to Azure, you don't have much control over how SQL Server is set up. MS has already set it up so that it is highly available.
Having said that, it seems that MS has been having some issues with SQL Azure over the last few days. We've been told that it only affected "a small number of users". At one point the service dashboard had 5 data centres affected by a problem. I had 3 databases in one of those data centres go down twice for about an hour each time, but one database in another affected data centre had no interruption.
If having a database connection is critical to your app, then the only way in the Azure environment to insure against problems that MS hasn't prepared for (this latest technical problem, earthquakes, meteor strikes) would be to co-locate your SQL data in another data centre. At the moment the most practical way to do this is to use the Sync Framework. There is the ability to copy SQL Azure databases, but this only works within a data centre. With your data located elsewhere, you could then point your app at the new database if the main one becomes unavailable.
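For reference, the in-datacentre copy mentioned above is a single T-SQL statement; a hedged sketch, assuming pyodbc and placeholder server, database, and credential names (check the exact syntax against the current SQL Azure documentation):

```python
import pyodbc

# Connect to the master database of the server; CREATE DATABASE cannot run
# inside a transaction, hence autocommit=True.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:myserver.database.windows.net;"   # placeholder server
    "DATABASE=master;UID=admin_user;PWD=...",     # placeholder credentials
    autocommit=True,
)

# Kick off an asynchronous copy of an existing database on the same server.
conn.execute("CREATE DATABASE mydb_copy AS COPY OF mydb;")
```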
While this looks good on paper, it may not have helped you with the latest problem, as it did affect multiple data centres. If you'd just been making database copies on a regular basis, that might have been enough to get you through. Or not.
(I would have posted this answer on Server Fault, but I couldn't find the question.)
This is just about a programming/architecture issue, but you may also want to ask the question on webmasters.stackexchange.com.
You need to find out the root cause before drawing any conclusions.
However, my guess is that one of two things was the problem:
The ISP connectivity differs between the test system and your production system. Either they use different ISPs, or different lines from the same ISP. When I worked at a hosting company we made sure that our IP connectivity went through at least two different ISPs who did not share fibre to our premises (and where we could, they had different physical routes to the building - the homing ability of backhoes when there's a critical piece of fibre to dig up is well proven).
Your datacentre had an issue with some shared production infrastructure. This might be edge routers, firewalls, load balancers, intrusion detection systems, traffic shapers, etc. These are typically only installed on production systems. Defences here involve understanding the architecture and making sure the provider has a (tested!) DR plan for restoring SOME service when things go pear-shaped. The neatest hack I saw here was persuading an IPS (intrusion prevention system) that its own management servers were malicious, so it couldn't be reconfigured at all.
Just a thought - your DC doesn't host any of the WikiLeaks mirrors, or PayPal/Mastercard/Amazon (who are getting DDoS'd by WikiLeaks supporters at the moment)?
