Improve CPU Utilization by Restructuring Nodes - Azure

We have a database located in the North Europe region and two App Service nodes on Azure (West Europe & North Europe). We use Traffic Manager to route traffic.
Our SQL database and storage are located in North Europe.
When we started the website, European locations were the closest to our customers.
However, we have seen a shift, and most of our customers are now in the USA.
We see high CPU utilization even though we run many instances in each region.
The question is:
Since most of our customers are in the USA and it is hard to relocate the database, is it better to keep the app structure as it is (North Europe & West Europe), or to create a new node in the USA, even though that node would still need to communicate with the database in North Europe?
Thank you

Having your app in a US region and the database in Europe is not recommended.
These are a few of the things you will run into:
1) High latency, since every data query has to round-trip to Europe.
2) Higher resource utilization: each request that accesses the DB takes longer, which increases memory usage while requests wait on data and makes the impact of load on the app far more severe.
3) Cross-region data egress: you pay for all the data moving from Europe to the US every time there is a query.
A better solution would be to do the following:
1) Set up a new DB in a US region and hook up active geo-replication.
At this point you will have a hot/cold configuration where any replica can be used to read data from the DB, but only the primary can be used for write operations.
2) Create a new version of the app/App Service plan in a US region.
3) Adapt your code to understand your geo-distributed topology.
Your app should be able to send all reads to the "closest" region and all writes to the primary database (see the sketch at the end of this answer).
4) Deploy the code to all regions
5) Add the new region to the Traffic Manager profile.
While this is not ideal, since write operations might still have to jump the pond, most apps have a read/write pattern that is heavily skewed towards reads (roughly 85% reads / 15% writes), so this solution works out, with the added benefit of giving you HA if one of the regions goes down.
You might want to look at this talk, where I go over how to set up a geo-distributed app using App Service, SQL Azure and the technique outlined above.
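To make step 3 concrete, here is a minimal sketch of that read/write split, assuming one primary database in North Europe, one read-only geo-replica in East US, and connection strings supplied through app settings (the setting names and region keys are hypothetical):

```python
import os

# Hypothetical app settings: the primary (writable) DB lives in North Europe,
# the read-only geo-replica lives in East US.
PRIMARY_DB = os.environ["SQL_PRIMARY"]
READ_REPLICAS = {
    "northeurope": os.environ.get("SQL_READ_NEU", os.environ["SQL_PRIMARY"]),
    "eastus": os.environ.get("SQL_READ_EUS", os.environ["SQL_PRIMARY"]),
}

# App Service exposes the hosting region via REGION_NAME ("North Europe",
# "East US", ...); using it as the routing key is an assumption here.
REGION = os.environ.get("REGION_NAME", "North Europe").replace(" ", "").lower()


def connection_string(write: bool = False) -> str:
    """Send writes to the primary, reads to the closest replica."""
    if write:
        return PRIMARY_DB
    return READ_REPLICAS.get(REGION, PRIMARY_DB)
```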

Have you considered sharding your data based on the location of your users? Performance will be better, and you can do maintenance during each region's off-peak hours. Allow me to recommend this article.
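To illustrate what that could look like in application code, here is a toy location-based shard map (the region keys and connection strings are made up; the article above covers the real tooling and data movement):

```python
# Hypothetical shard map: a user's home region decides which database owns
# that user's rows, so most queries stay inside one region.
SHARD_MAP = {
    "us": "Server=tcp:myapp-us.database.windows.net;Database=app_us;...",
    "europe": "Server=tcp:myapp-eu.database.windows.net;Database=app_eu;...",
}


def shard_for(user_region: str) -> str:
    """Return the connection string of the shard that owns this user's data."""
    return SHARD_MAP.get(user_region.lower(), SHARD_MAP["europe"])
```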

Related

Azure AKS low latency infrastructure

What would be a good infrastructure setup to ensure minimum latency for users under the following conditions:
One single AKS cluster in Europe
Users from multiple regions: US, Europe, Australia
Latency around 50 milliseconds or less
Is there any way to use the Azure network backbone to ensure this? Any input is welcome. I know this is not the ideal setup in case of a regional failure, I just want to hear what would be the possible options to improve the latency.
You could look at Azure Virtual WAN, but even that will not meet your 50 ms-or-less requirement between Australia and an AKS cluster in Europe, or between the US and Europe. The only way to get to those numbers that I know of would be to deploy multiple AKS clusters in multiple regions.
We publish our monthly round-trip latency figures, and you can see that leaving any continent costs about 120-250 ms. That's just the pure physics of where technology is right now. A theoretical ping travelling at the speed of light from the US to Europe alone takes about 40 ms, and that's if you only traverse one router.
https://learn.microsoft.com/en-us/azure/networking/azure-network-latency
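As a back-of-the-envelope check of the 40 ms figure above (the ~6,000 km one-way distance is an assumption):

```python
distance_km = 6_000            # assumed one-way US-Europe distance
c_vacuum_km_s = 300_000        # speed of light in vacuum
c_fiber_km_s = 200_000         # roughly 2/3 of c in optical fibre

rtt_vacuum_ms = 2 * distance_km / c_vacuum_km_s * 1_000   # ~40 ms
rtt_fiber_ms = 2 * distance_km / c_fiber_km_s * 1_000     # ~60 ms, before any routers

print(f"theoretical round trip (vacuum): {rtt_vacuum_ms:.0f} ms")
print(f"theoretical round trip (fibre):  {rtt_fiber_ms:.0f} ms")
```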

Change cloud service region

Is it possible to change a Cloud Service's region (e.g., move from East US to West US)?
I don't see an option in the management console to do it, or maybe I did not dig deep enough.
I would like to do it because my database is in a different region from my application's, and I guess that could decrease performance.
Thanks,
No, there is no way to change a Cloud Service's region. You have to create a new Cloud Service in the desired region and redeploy there. It becomes more complicated when you also have Storage accounts with data that you have to move. For that you could probably use Red Gate's Cloud Services or another mature product.
And you are right about the database and performance. It is not only performance, but also cost savings: when your database is in a different geographic region, all data that comes out of it is outbound (egress) traffic, which is charged per GB!
You can also write your own script in PowerShell, which is a powerful tool and can help you a lot, including copying the data between regions directly (without passing through your computer). I am going to do that now.
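The same idea - a server-side copy between regions that never touches your machine - can also be sketched with today's Python storage SDK (azure-storage-blob v12 is assumed here, and the connection strings, key and container name are placeholders; the original answer used PowerShell):

```python
from datetime import datetime, timedelta

from azure.storage.blob import (
    BlobSasPermissions,
    BlobServiceClient,
    generate_blob_sas,
)

# Placeholder credentials for the two storage accounts.
src = BlobServiceClient.from_connection_string("<source-connection-string>")
dst = BlobServiceClient.from_connection_string("<destination-connection-string>")
container = "mydata"  # the destination container is assumed to exist already

src_container = src.get_container_client(container)
for blob in src_container.list_blobs():
    # A short-lived read SAS lets the destination account pull the blob directly.
    sas = generate_blob_sas(
        account_name=src.account_name,
        container_name=container,
        blob_name=blob.name,
        account_key="<source-account-key>",
        permission=BlobSasPermissions(read=True),
        expiry=datetime.utcnow() + timedelta(hours=1),
    )
    source_url = f"{src_container.get_blob_client(blob.name).url}?{sas}"

    # start_copy_from_url kicks off an asynchronous, server-side copy: the data
    # moves between data centers without passing through this machine.
    dst.get_blob_client(container=container, blob=blob.name).start_copy_from_url(source_url)
```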

Azure Traffic Manager Load Balance Options

I tried to dig through MSDN but could not find a concrete statement on which load-balancing method is best.
Could someone please shed some light on which of the options below is the best fit for the given scenario:
Performance
Failover
Round Robin.
Scenario:
x Web Roles hosted on Large VMs in a single data center.
Requirement:
must be 100% up 24x7.
Thank you.
First: Do you really want to offer a 100% uptime SLA to your customers when Azure itself doesn't offer 100% in its SLAs?
That said: Traffic Manager only load-balances your compute, not your storage. So if you're trying to increase uptime by having a set of backup compute nodes running in another data center, you need to think about data access speed and cost:
With round robin, you'll now have traffic distributed across multiple data centers, guaranteed, and constantly. And if your data is in a single data center (and it's a good idea to keep data in a single system of record unless you have replication logic fully taken care of), some of your users are going to see increased latency, as the nodes separated from your data will be requesting data across many miles (potentially between continents). Plus, data egress has a $$$ cost to it.
With performance, your users are directed toward the data center which offers them the lowest latency. Again, this now means traffic across multiple data centers, with the same issues as round robin.
With failover, you now have all traffic going to one data center, with another designated as your failover data center (so it's for High Availability). In the event you have an outage in the primary data center, you'd now have a failover data center to rely on. This may help justify the added latency and cost, as you'd only experience this latency+cost when your primary app location becomes unavailable for some reason.
So: If you're going for the high availability route, to help approach the 100% availability mark, I'm guessing you'd be best off with the failover model.
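Purely to illustrate the failover (priority) behavior - Traffic Manager itself does this at the DNS level, and the endpoint URLs here are hypothetical - the routing decision boils down to something like this:

```python
import urllib.error
import urllib.request

# Priority order: primary first, then the designated failover deployment.
ENDPOINTS = [
    "https://myapp-primary.example.net/health",
    "https://myapp-failover.example.net/health",
]


def pick_endpoint(timeout_s: float = 2.0) -> str:
    """Return the first endpoint whose health probe succeeds, in priority order."""
    for url in ENDPOINTS:
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                if resp.status == 200:
                    return url
        except (urllib.error.URLError, OSError):
            continue  # probe failed, try the next endpoint in the priority list
    raise RuntimeError("no healthy endpoint available")
```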
Traffic Manager comes into the picture only when your application is deployed across multiple cloud services, whether within the same data center or in different data centers. If your application is hosted in a single cloud service (with multiple instances, of course), then the instances are load-balanced using a round-robin pattern. This is the default load-balancing behavior and comes to you at no extra charge.
You can read more about traffic manager here: https://azure.microsoft.com/en-us/documentation/articles/traffic-manager-overview/
In my view there is no single "best" load-balancing method for Azure Traffic Manager. Each has unique advantages, and the right choice depends on the application's requirements. The most common scenario is to use the performance load-balancing option with Azure Traffic Manager, but as Gaurav said, you will have to host your application across more than one cloud service. If you wish to implement performance load balancing, here is a link to get you started - http://sanganakauthority.blogspot.com/2014/06/performance-load-balancing-using-azure.html

Migration from server hosting in a DC to Azure

I run my own UK-based hosting and web design company.
We have about 10 physical servers in a DC in the UK and host about 300 or so web sites, email servers and web applications. They are all on a Windows Server platform, with a few Linux VMs.
I now have a Windows Azure account and have set up a medium Windows 2008 server within it. I want to start using it to host and migrate some of my web sites and services, with the view that maybe I could move ALL my services over and get rid of the need for any of my physical servers in the DC.
The question I am still really struggling with is how much this will really cost me on an ongoing basis.
The billing area doesn't really tell me much, as it simply shows my bill as £0.00. It shows my usage, but I am really struggling to compare the resources I am currently using with how they are billed in Azure. It doesn't even show me what it would have cost me if I weren't on a trial.
I don't want to move hosted sites over if it's going to cost me more than hosting in my current DC.
I was thinking of moving many sites onto the new server I have set up, as it's a better spec than a few of my current servers, so I would see a big benefit. I even considered setting up a much larger server in my Azure account, but again I am unsure as to the real cost of that box and it's hard to compare.
Do I simply need to look at the calculator, select the number of servers I will deploy, and select how much storage and bandwidth I need? Or do I need to look at the items in the billing area as well - such as:
Compute units, Storage Transactions, Data Transfer Out, Data Transfer In
When I set up the server it didn't ask me how much storage I wanted; it just set it up with about 150 GB available in the actual server.
Any advice? I really see this as something I want to use over the next 12 months, but not if, once I have finally migrated everything, it's going to cost me more than my normal hosting and I have to move it all back at the end of the 12 months.
Cheers
Because you're using Windows Azure Virtual Machines, you should first use the virtual machine pricing calculator. That calculator displays only the costs relevant to your scenario, except for the storage transaction cost. Here is a breakdown of the costs you'll have to consider:
Virtual Machines
The Virtual Machine cost appears on the bill as compute units. Throughout the Windows Azure Virtual Machine preview, the cost per core per hour is $0.08. Once VMs reach general availability, the cost will be $0.115 per core per hour for Windows VMs and $0.085 for Linux VMs. Using the calculator, you can see that a medium instance uses two cores and will therefore be billed at $0.16 per hour during the preview period. You will have to use your best judgement to determine how many virtual machines you'll need and how large they should be.
Storage
You will have to pay for the data actually used within your VHDs. Let's assume you have one virtual machine with one VHD attached. If the size of the VHD is 200 GB but only 100 GB is used, you will pay for 100 GB per month.
Bandwidth
Microsoft now only charges for egress data transfers (data going out of the data center). With this pricing change, the Data Transfer In section of the billing area will always be 0.00. Hopefully, you already have a good idea about your current outbound data usage. If so, you can calculate your bandwidth cost by simply moving the bandwidth slider to the correct spot.
Storage Transactions
If you scroll down to the Transactions section of this blog post, you'll see how storage transactions are counted. Basically, you count one transaction per write operation and possibly one transaction per read operation, depending on whether the data is cached. The cost of storage transactions is negligible because you only pay one cent per 100,000 transactions. That's why storage transactions are left out of the calculator.
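Putting the numbers above together, a rough monthly estimate looks like this (the hours-per-month figure and transaction volume are assumptions; the unit prices are the preview prices quoted above):

```python
hours_per_month = 730                     # assumed average month length in hours
medium_vm_per_hour = 0.16                 # 2 cores x $0.08/core/hour (preview pricing)
storage_gb_used = 100                     # only the data actually used in the VHD is billed
transactions = 5_000_000                  # assumed workload
transaction_unit_price = 0.01 / 100_000   # one cent per 100,000 transactions

compute_cost = medium_vm_per_hour * hours_per_month       # ~$116.80 per month
transaction_cost = transactions * transaction_unit_price  # ~$0.50 per month

print(f"compute:      ${compute_cost:,.2f}")
print(f"transactions: ${transaction_cost:,.2f}")
print(f"storage:      {storage_gb_used} GB at the per-GB rate from the calculator")
```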
HTH
Answering such a question in an input box limits how much detail I can give. The cost calculator is there to give you an upper-bound estimate of what the cost will be if your usage stays under the selected limits. In my personal experience, if you choose higher usage limits and keep your actual usage within your forecast, there will be no hidden charges. But reality could be far different, because you may not estimate usage correctly at first, and that could change the cost later.
For moving a traditional web hosting solution to Windows Azure, the latest release of Windows Azure Virtual Machines is the best fit, as it requires minimal migration effort. The VM size you choose will have fixed resources (compute, local storage, network bandwidth, disk I/O etc.), and the cost will be fixed as long as you stay under the limit, so there will be no unseen charges.
Windows Azure Storage is pay-as-you-go (around ~$0.012/GB depending on usage tier) and there is no cap. When moving from traditional web hosting to a cloud environment, due to application architecture, I have seen less cloud storage usage and more VM storage, so it may not cost a lot.
The place where you will see cost variation is data egress/ingress; it is difficult to forecast because it all depends on application usage, so it is something you will have to account for as a variable cost.
You can also ask in the Windows Azure Virtual Machine forum, where dedicated Windows Azure Virtual Machine people are available to answer such questions.
Finally, one thing I would add: Windows Azure Virtual Machines are still in preview, so it would be best to move only part of your business to Windows Azure VMs, for trial and testing purposes, because no matter what you expect you may encounter problems (it is a preview release) and this could cause service disruption.

How do I make my Windows Azure application resistant to Azure datacenter catastrophic event?

AFAIK Amazon AWS offers so-called "regions" and "availability zones" to mitigate the risk of partial or complete datacenter outages. It looks like if I have copies of my application in two "regions" and one "region" goes down, my application can still continue working as if nothing happened.
Is there something like that with Windows Azure? How do I address risk of datacenter catastrophic outage with Windows Azure?
Within a single data center, your Windows Azure application has the following benefits:
Going beyond one compute instance, your VMs are divided into fault domains, across different physical areas. This way, even if an entire server rack went down, you'd still have compute running somewhere else.
With Windows Azure Storage and SQL Azure, storage is triple replicated. This is not eventual replication - when a write call returns, at least one replica has been written to.
Ok, that's the easy stuff. What if a data center disappears? Here are the features that will help you build DR into your application:
For SQL Azure, you can set up Data Sync. This facility synchronizes your SQL Azure database with either another SQL Azure database (presumably in another data center), or an on-premises SQL Server database. More info here. Since this feature is still considered a Preview feature, you have to go here to set it up.
For Azure storage (tables, blobs), you'll need to handle replication to a second data center, as there is no built-in facility today. This can be done with, say, a background task that pulls data every hour and copies it to a storage account somewhere else. EDIT: Per Ryan's answer, there's data geo-replication for blobs and tables. HOWEVER: Aside from a mention in this blog post in December, and possibly at PDC, this is not live.
For Compute availability, you can set up Traffic Manager to load-balance across data centers. This feature is currently in CTP - visit the Beta area of the Windows Azure portal to sign up.
Remember that, with DR, whether in the cloud or on-premises, there are additional costs (such as bandwidth between data centers, storage costs for duplicate data in a secondary data center, and compute instances in additional data centers).
Just like with on-premises environments, DR needs to be carefully thought out and implemented.
David's answer is pretty good, but one piece is incorrect. For Windows Azure blobs and tables, your data is actually geographically replicated today between sub-regions (e.g. North and South US). This is an async process that has a target of about a 10 min lag or so. This process is also out of your control and is purely for a data center loss. In total, your data is replicated 6 times in 2 different data centers when you use Windows Azure blobs and tables (impressive, no?).
If a data center was lost, they would flip over your DNS for blob and table storage to the other sub-region and your account would appear online again. This is true only for blobs and tables (not queues, not SQL Azure, etc).
So, for a true disaster recovery, you could use Data Sync for SQL Azure and Traffic Manager for compute (assuming you run a hot standby in another sub-region). If a datacenter was lost, Traffic Manager would route to the new sub-region and you would find your data there as well.
The one failure you didn't account for is the possibility of an error being replicated across data centers. In that scenario, you may want to consider running Azure PaaS as part of the HP Cloud offering in either a load-balanced or failover configuration.
