Data sovereignty vs Azure hosting options

I'm just learning about Azure so forgive me for my naivety. I work for a federal government that would be very hesitant to have their applications and data hosted in another country. Could a local company offer "Azure" services? i.e. could software developers in a government department build their applications and deploy them to the Azure cloud, ensuring that their data stays within the country? Or would they have to look at a non-Microsoft cloud provider?

Data and Compute will reside in the datacenter you specify. Blobs, Tables and Queues are also backed up automatically to a paired data center:
San Antonio <--> Chicago
Dublin <--> Amsterdam
Hong Kong <--> Singapore
You can opt-out of cross-datacenter data backup if data sovereignty becomes an issue. Once opted-out, data would only be in the specified data center, and you'd need to handle DR on your own (by possibly backing up data to on-premises storage).
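If you do opt out and want to handle DR yourself, here is a minimal sketch using the classic Azure PowerShell module (the -GeoReplicationEnabled switch and account name are assumptions - check your module version, as this toggle has also been portal-only at times):

    # Assumes the classic (Service Management) Azure module is loaded and a
    # subscription has been selected with Select-AzureSubscription.
    # Turn off geo-replication so data stays only in the primary data center.
    Set-AzureStorageAccount -StorageAccountName "mystorageaccount" `
                            -GeoReplicationEnabled $false

From that point on you own disaster recovery, e.g. scripted copies of your blobs down to on-premises storage.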
Aside from those 6 datacenters, Fujitsu runs a Windows Azure data center in Japan. See this press release for more info.

Yes, when you create your Azure service you can specify which region (i.e. which data center location) it runs in.

I'm not sure if you know this, but the Federal CIO (Vivek Kundra) is really pushing hard for Agencies to move to the cloud. You might want to check out Info.Apps.Gov for guidelines on the Federal Cloud initiative and resources for what you can and can't do.
To answer your immediate question: No. Only MS hosts Azure to my knowledge. I do know that Amazon, however, is bending over backwards to accommodate Government clients, and you can control which datacenters are used on that service. MS appears to have a similar capability per the other answer to this question.

As far as I can tell, these are the only locations:
http://en.wikipedia.org/wiki/Azure_Services_Platform#Datacenters
If they're that concerned about data security though, they should deal directly with Microsoft, not buy Azure services the same way a client usually would. Microsoft may be able to arrange something depending on budget (but probably not).
Edit: What I'm basically saying is, Microsoft is not going to arbitrarily do special licensing. Meaning you either need a large enough budget to convince MS to build a data center in your country, or you need some other way of convincing MS to allow Azure services hosted in your country. Also, I hate to sound paranoid, but if you're worried about America seeing your data, you likely should avoid American companies.

If there isn't a Windows Azure Data Centre in the relevant country, but you still want to use Azure, you'll need to look at a hybrid cloud model where data remains resident in a private cloud. However, in-flight data can still present complications for some organisations and Azure may not be the right answer in all cases.
If you like, I can talk about it some more using Chat. The company I work for specialises in just these cases and has the only production Windows Azure data centre that isn't owned by Microsoft (and isn't in the US). Probably best not to go into further specifics here, though, for fear of my answer looking like pure spam!

Related

move services between windows azure datacenters

I believe that there's no easy way, but it doesn't hurt to ask you guys.
Is there a way to move services (websites, cloud apps, etc.) between datacenters without re-creating and re-deploying them?
For example: I have a website in West US and I want to move to Europe.
(By the way, I suggested this to the Azure team and I hope you can vote for it: http://feedback.windowsazure.com/forums/34192-general-feedback-/suggestions/5071811-move-services-from-datacenters)
As Gaurav suggested, you can try to put in a support ticket with Microsoft. I'm assuming you'll need a support plan though in order to get that through, since this isn't really a billing related issue.
Other than going that route, there is nothing today that will move services (websites, cloud services, etc.) from one datacenter to another.
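If you do re-create and re-deploy, it can at least be scripted - a rough sketch using the classic Azure PowerShell cmdlets (service names and package/config paths here are placeholders):

    # Create an empty cloud service in the target region...
    New-AzureService -ServiceName "myapp-eu" -Location "West Europe"

    # ...then deploy the same package/configuration you used in West US.
    New-AzureDeployment -ServiceName "myapp-eu" `
                        -Package ".\myapp.cspkg" `
                        -Configuration ".\ServiceConfiguration.cscfg" `
                        -Slot "Production"

    # After re-pointing DNS, remove the old service:
    # Remove-AzureService -ServiceName "myapp-westus" -Force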

What's the point of Azure Add-Ons?

Windows Azure has a store.
The stuff you can buy there is called Add-Ons, and they fall into two categories: service and data.
I understand the point of some of the service offerings, but not all, and I don't yet understand the point of the data offerings at all.
With services, some offerings are database deployments such as ClearDB (MySQL) and MongoLab. That makes sense to me: You get those databases deployed and monitored with a few clicks, yet those databases run in the same data center as the applications that consume them, which is good for performance and security.
For most other services (there is a simple scheduler application, for example), it seems that the only advantage is the unified billing method. Is that a correct observation, or is there more to it?
Then the data offerings: The fact that I can buy Bing query transactions cannot really have anything to do with the rest of my Azure account, right? Technically, it's just Bing (or whatever other data offering you look at) and presumably I'm going against the same Bing API that I would have used previously (I'm assuming that was possible). There is nothing really deployed in any Azure data center the moment I buy it, is there? So in what sense is that an Add-On?
In a nutshell, am I missing something or are most Add-Ons just a method of buying external services and having them billed on my Azure account?
If you can answer the question for other 'app stores', you can answer it for Windows Azure. We know about THE App Store (as per the court battles over the name) which is the only way to get applications onto the closed (iOS) device. There is also a Mac App Store which would seem unnecessary because of the ability to install apps by yourself (which makes it more similar to the Azure store). In this case the reason for the store is discoverability, association with the store brand (where the buyer assumes a degree of vetting), a single point for updates, and simplified billing.
The Windows Azure Store (and data marketplace) exist for similar reasons. It is less about the technical benefits than the association with the Azure brand. Since SO is technical, let me highlight some (largely) technical aspects:
Don't assume that the service will run in the same data centre. In most cases it probably won't.
There is an advantage of having everything in one place from an operational point of view. Granting of operator access to the subscription means that you don't have to administer accounts on the service. I have had problems with this though - where the service made it difficult to do other things (such as get support) because the Azure identity wasn't handled very well. (I had this with New Relic).
The combined billing works on credit card payments only. Last time I checked (Summer 2013) there was no way to get an add-on with a pay-by-invoice subscription, so a second subscription (with credit card) was needed anyway.
Add-ons seem to still be in 'preview', which may indicate low adoption. Microsoft probably hasn't seen it grow the way they expected and may not develop it much in future. This is opinion only, and shouldn't affect the service (after all, the store is just a gateway and has little technical impact on the service provided).
Don't completely ignore the store, however. The biggest benefit seems to be the free tiers and reduced pricing, where Microsoft has managed to get service providers to make the store attractive. For example, the SendGrid free option provides 25,000 emails per month, and there doesn't seem to be a free option on SendGrid.com. New Relic pricing was (and maybe still is) significantly less.
Pay attention mainly to the pricing benefits, rather than perceived technical benefits.

Change cloud service region

Is it possible to change a Cloud Service region (i.e: move from East US to West US)?
I don't see an option from the management console to do it or maybe I did not dig deep enough.
I would like to do it since I have my database in one region different to my application's and I guess it could decrease performance.
Thanks,
No, there is no way to change a Cloud Service's region. You have to create a new Cloud Service in the desired region and redeploy there. It becomes more complicated when you also have Storage accounts with data which you have to move. For this you could probably use Red Gate's Cloud Services or another mature product.
And you are right about the database and performance. It is not only performance, but also cost savings. When your database is in a different geographic region, all data that comes out of your database is basically Outbound (Egress) traffic, which is charged per GB!
You can make your own script using PowerShell, which is a powerful tool and can help you a lot, including copying the data between regions directly (not passing through your computer). I am going to do that now.
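For the direct region-to-region copy, a sketch using the classic Azure PowerShell storage cmdlets (account names, keys and container names are placeholders; the destination container must already exist, and the copy runs server-side inside the storage service):

    # Contexts for the source and destination storage accounts.
    $src  = New-AzureStorageContext -StorageAccountName "srcaccount"  -StorageAccountKey "<src-key>"
    $dest = New-AzureStorageContext -StorageAccountName "destaccount" -StorageAccountKey "<dest-key>"

    # Kick off an asynchronous, server-side copy of every blob in a container.
    Get-AzureStorageBlob -Container "mycontainer" -Context $src |
        Start-AzureStorageBlobCopy -DestContainer "mycontainer" -DestContext $dest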

Doubts about Windows Azure Platform Introductory Special

I'm considering to join the Windows Azure Platform Introductory Special, but I'm a little bit afraid of losing money with it. I don't wanna develop any fancy large scale application, I want to join just to learn Azure and do my experiments, what should I be afraid of?
In the transference, it says: "Data Transfers (per region)", what does that mean?
Can I put limits in place to stop the app if it goes over this plan, in order to avoid getting charged?
Can it be "pre pay" instead "bill pay"?
Would it be enough for a blog?
Any experience so far?
Kind regards.
As ligget pointed out, Azure isn't cost effective as a host for an application that can be easily deployed to a traditional shared hosting provider. Azure's target market is those who want dedicated resources without the need to micro-manage the infrastructure, plus the capability to easily scale up/down based on demand.
That said, here's the answers to the questions you posted:
Data Transfers are based on bandwidth in and out of the hosting data center. Bandwidth for communication occurring between components (SQL Azure, Windows Azure, Azure Storage, etc.) in the same datacenter is not billable.
Your usage is not currently capped when the free quotas are used up. However, you will receive warning emails when those items approach their usage thresholds.
There is the option to pay your subscription using a PO, but the minimum threshold for most of these operations is $500/month. So as a hobbyist, it's unlikely you'll want that route.
The introductory special does not provide enough resources for hosting a 24x7 personal blog. That level includes only 25hrs of compute resources. Each hour a single instance of your application is deployed will count against this, even if the application received no traffic. Think of it like renting office space. You still pay rent on the office even if there are no customers there.
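To put rough numbers on it: a single instance running 24x7 burns through about 720 hours a month, so the 25 included hours cover barely one day of continuous deployment.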
All this said, there's still much to be learned with the introductory special. The Azure development tools allow you to work with Windows Azure and Azure storage locally and get a feel for how they work. The introductory special then lets you deploy those solutions so you can see what works and what doesn't (not everything that works locally works hosted).
I would recommend you host your blog somewhere else - it's a waste of resources running it on Azure and you'll find much cheaper options. A recently introduced extra small instance would be a better choice in this case, but AFAIK it is charged separately as of now, e.g. even when you have an MSDN subscription those extra small instance hours do not count towards free Azure hours that come with the subscription.
There is no pre-pay option I know of and it's not possible to stop the app automatically. It'll be running until the deployment is deleted (beware! even if suspended/stopped the deployment will continue to accrue charges). I believe you will be sent a notification shortly before reaching your free hours threshold.
Be aware that when launching more than 1 instance you are charged for every hour of every instance combined. This can happen for example when you have more than one role in your Azure project (1 web role + 1 worker role - a separate instance will be started for each role).
Data transfer means your entire data transfer: blobs/table storage/queues (transfers between your hosted service and storage account inside the same data center are free) + whatever data is transferred in/out of your hosted application, e.g. when somebody visits your pages. When you create storage accounts and hosted services in Azure you will specify a region that will be hosting your account/app - hosting in Asia is slightly more expensive than in Europe/the U.S.
Your best bet would be to contact Microsoft with these questions.

Minimize downtime in Azure

We are experiencing a very serious unscheduled downtime of our Azure application today for what is now coming up to 9 hours. We reported to Azure support and the ops team is actively trying to fix the problem and I do not doubt that. We managed to get our application running on another "test" hosted service that we have and redirected our CNAME to point at the instance so our customers are happy, but the "main" hosted service is still unavailable.
My own "finger in the air" instinct is that the issue is network related within our data center (west europe), and indeed, later on in the day the service dash board has gone red for that region with a message to that effect. (Our application is showing as "Healthy" in the portal, but is unreachable via our cloudapp.net URL. Additionally threads within our application are logging sql connection exceptions into our storage account as it cannot contact the DB)
What is very strange, though, is that the "test" instance I referred to above is also in the same data centre and has no issues contacting the DB and it's external endpoint is fully available.
I would like to ask the community if there is anything that I could have done better to avoid this downtime? I followed the guidance with respect to having at least 2 instances per role, yet I still got burned. Should I move to a more reliable data centre? Should I deploy my application to multiple data centres? How would I manage the fact that my SQL Azure DB is in the same datacentre?
Any constructive guidance would be appreciated - being a techie, I've never had a more frustrating day being able to do nothing to help fix the issue.
There was an outage in the European data center today with respect to SQL Azure. Some of our clients got hit and had to move to another data center.
If you are running mission critical applications that cannot be down, I would deploy the application into multiple regions. DNS resolution is obviously a weak link right now in Azure, but can be worked around (if you only run a website it can be done very simply using Response.Redirects or similar)
Now, there is a data synchronization service from Microsoft that will sync up multiple SQL Azure databases. Check here. This way, you can have mirror sites up in different regions and keep them in sync from a SQL Azure perspective.
Also, it would be a good idea to employ a 3rd-party monitoring service that would detect problems with your deployed instances externally. AzureWatch can notify you, or even deploy new nodes if you choose to, when some of the instances turn "Unresponsive".
Hope this helps
I can offer some guidance based on our experience:
Host your application in multiple data centers, complete with SQL Azure databases. You can connect each application to its data-center-specific SQL server. You can also cache any external assets (images/JS/CSS) on the data-center-specific Windows Azure machine, or leverage Azure Blob Storage. Note: extra costs will be incurred.
Set up one-way SQL replication between your primary SQL Azure DB and the instance in the other data center. If you want to do bi-directional replication, take a look at the MSDN site for guidance.
Leverage Azure Traffic Manager to route traffic to the data center closest to the user. It has geo-detection capabilities which will also improve the latency of your application. So you can map http://myapp.com to the internal URL of your data center, and a user in Europe should automatically get routed to the European data center and vice versa for the USA. (A sketch of this setup follows below.) Note: at the time of writing this post, there is no way to automatically detect a failure and fail over to another data center. Manual steps will be involved once a failure is detected, and the failover is all-or-nothing (i.e. you will fail over both the Windows Azure AND SQL Azure instances). If you want micro-level failover, then I suggest putting all your config in the service config file and encrypting the values, so you can edit the connection string to connect instance X to DB Y.
You are all set now. I would create or install a local application to detect the availability of the site. Better still, check the availability of application-specific components by writing a diagnostic page or web service and then polling it from a local computer.
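For the Traffic Manager step, a minimal sketch with the classic Azure PowerShell cmdlets (profile name, endpoint domains and monitor path are placeholders; verify the parameters against your module version):

    # Create a profile that routes each user to the closest data center.
    $tm = New-AzureTrafficManagerProfile -Name "myapp" `
        -DomainName "myapp.trafficmanager.net" `
        -LoadBalancingMethod "Performance" -Ttl 30 `
        -MonitorProtocol "Http" -MonitorPort 80 -MonitorRelativePath "/"

    # Register each regional deployment as an endpoint, then commit.
    $tm | Add-AzureTrafficManagerEndpoint -DomainName "myapp-us.cloudapp.net" `
              -Type "CloudService" -Status "Enabled" |
          Add-AzureTrafficManagerEndpoint -DomainName "myapp-eu.cloudapp.net" `
              -Type "CloudService" -Status "Enabled" |
          Set-AzureTrafficManagerProfile

    # Finally, CNAME myapp.com to myapp.trafficmanager.net at your DNS host.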
HTH
As you're deploying to Azure you don't have much control over how SQL Server is set up. MS have already set it up so that it is highly available.
Having said that, it seems that MS has been having some issues with SQL Azure over the last few days. We've been told that it only affected "a small number of users". At one point the service dashboard had 5 data centres affected by a problem. I had 3 databases in one of those data centres go down twice for about an hour each time, but one database in another affected data centre had no interruption at all.
If having a database connection is critical to your app, then the only way in the Azure environment to insure against problems that MS hasn't prepared for (this latest technical problem, earthquakes, meteor strikes) is to locate a copy of your SQL data in another data centre. At the moment the most practical way to do this is to use the Sync Framework. There is an ability to copy SQL Azure databases, but this only works within a data centre. With your data located elsewhere you could then point your app at the new database if the main one becomes unavailable.
While this looks good on paper, it may not have helped you with the latest problem, as it did affect multiple data centres. If you'd just been making database copies on a regular basis, that might have been enough to get you through. Or not.
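For reference, a database copy within a data centre is a single T-SQL statement; here is a sketch wrapped in PowerShell (server name and credentials are placeholders, and Invoke-Sqlcmd needs the SQL Server PowerShell tools installed):

    # CREATE DATABASE ... AS COPY OF starts an asynchronous, transactionally
    # consistent copy - but only to a server in the same data centre.
    Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" `
        -Database "master" -Username "admin@myserver" -Password "<password>" `
        -Query "CREATE DATABASE MyDb_Copy AS COPY OF MyDb;"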
(I would have posted this answer on server fault, but I couldn't find the question)
This is just about a programming/architecture issue, but you may also want to ask the question on webmasters.stackexchange.com
You need to find out the root cause before drawing any conclusions.
However, my guess is that one of two things was the problem:
The ISP connectivity differs between the test system and your production system. Either they use different ISPs, or different lines from the same ISP. When I worked in a hosting company we made sure that our IP connectivity went through at least two different ISPs who did not share fibre to our premises (and where we could, they had different physical routes to the building - the homing ability of backhoes when there's a critical piece of fibre to dig up is well proven).
Your datacentre had an issue with some shared production infrastructure. These might be edge routers, firewalls, load balancers, intrusion detection systems, traffic shapers, etc. These are typically also only installed on production systems. Defences here involve understanding the architecture and making sure the provider has a (tested!) DR plan for restoring SOME service when things go pear-shaped. The neatest hack I saw here was persuading an IPS (intrusion prevention system) that its own management servers were malicious - so it couldn't be reconfigured at all.
Just a thought - your DC doesn't host any of the Wikileaks mirrors, or Paypal/Mastercard/Amazon (who are getting DDOS'd by wikileaks supporters at the moment)?
