Using Windows Azure in Europe and the Middle East

I've built my application in .NET and SQL Server 2008.
While looking for a hosting solution, I stumbled upon Windows Azure.
I saw that it is currently only available in the US.
Can I use the service if I live outside of the US?
If I upload my website there, will visitors from outside of the US be blocked?
Sorry for posting a non-programming question. I am not receiving an answer anywhere else, and I can see that there are several questions about Azure here which are not programming related.

Windows Azure has a data center hosted in northern Europe. Your users won't be blocked no matter where the application is hosted. See this link for status and locations.

We have an Azure-hosted application in the US. One of our developers is in Pakistan. He has no issues developing against our Azure Table Storage or using the application from there. He is also impressed with the overall speed of the application compared to other web applications he uses that are hosted locally and in the US.
Obviously an app hosted in the same region would be quicker, all things being equal. However, we have been really happy with the quality of the service from Azure, and overall it probably offers better performance, even outside the region, than a poorly managed shared hosting environment. Also, you can change the region where your Azure app is hosted, so over time, as new regions are added, you can migrate your app to them.

Nope, it won't be blocked. But it may be more sluggish, due to latency, compared to locally hosted applications. Also, if you are in the EU you might want to check the Data Protection Act; it is illegal to store certain private data concerning EU citizens on US-based servers.


Moving to IaaS on MS Azure

We have got an application running fine on-premises and plan to move it to IaaS on MS Azure. Do we need to make any changes to it, or will it work as is?
I agree with the above post. You have not detailed whether you are using Virtual Machines (with SQL Server) or are going to use Azure SQL. You will have to make choices about fail-over and geo-redundancy, cloud services, etc. There are IP restrictions that may affect you (I don't know, since I am not sure what you are moving). More than anything, I always warn people about the cost; it is difficult to understand. Here is an article series I wrote on Azure & SharePoint; you can skip the SharePoint stuff, but the cost/limitation/VM material and such would still apply.
http://www.matthewjbailey.com/sharepoint-azure-guide/
We've managed a lift-and-shift of an on-premises Windows app into Azure, but I wouldn't say it's been without its pain. The above comments definitely ring true; you need to provide a bit more of an overview of what the current application does so that people can help answer your question.
In my experience, the only stumbling blocks to moving on-premises applications into Azure are:
Hardware requirements, i.e. if your application requires some specific hardware
Cost: it's not always cheaper to move large systems into Azure
Licensing: make sure that your existing licensing is compatible with a cloud system that you don't control

When should I choose "Cloud Service" over "Virtual Machine" on Azure?

Looking into it, I found that one role on one small compute instance of a Cloud Service is almost 60% more expensive than the same one small "virtual machine"...
So why should I choose to use a Cloud Service over a Virtual Machine?
Searching the web I came across a lot of articles about this, including this article, but none were clear enough for me... the comparison in the last one is plain useless in my opinion...
Is there a "perk" that I don't know about, or that is not being considered? Something to justify the "extra charge" for a Cloud Service... Does code running on a Cloud Service perform better than code running on a Virtual Machine (maybe because there's less overhead)? Anything?
I think a Virtual Machine would be used when we need to migrate our application to the cloud and make it 'just work'. We don't need much additional effort to move our legacy code to Azure if we use a Virtual Machine, but it doesn't provide the rich PaaS features of a Cloud Service, such as automatic deployment, automatic updates, load balancing, etc.
So if we have a legacy system and we want to move to Azure quickly, then we can choose a Virtual Machine. But if we need to manage a bunch of machines, Cloud Services will help us a lot and let us focus on the business logic.

Programming on a normal IIS web host + SQL vs. Azure + Azure SQL (just a hobby). Similar costs?

I am using a normal IIS web host to host my website and web services. It is just a hobby and I get very little traffic. I would like to use Azure instead, since I would like an excuse to learn Azure.
Is anyone out there using Azure in this way who can tell me what their monthly cost is? I subscribed to Azure long ago and forgot about it, and a month later had a $90 bill, so that really scares me.
Right now my web host + SQL is about $25 a month.
Is there a way to have Azure shut the service off if it goes over a certain monthly cost?
Well, even a very small instance costs $0.05/hour, and the SLA is only guaranteed if you run two or more instances. That in itself adds up to approximately $75/month.
SQL Azure is at least $9.99/month. Add to this charges for traffic, etc.
There are reasons why the Azure pricing model is like this. You do get your very own virtual machine instances with dedicated RAM, which you typically don't get with shared hosting, so taking that into account the Azure price may be reasonable, but it isn't very competitive for very small hobby sites.
The official price list is here: http://www.microsoft.com/windowsazure/pricing/
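As a rough, hedged sketch of the arithmetic above (the rates are the ones quoted in this answer and may well have changed since):

    using System;

    // Back-of-the-envelope estimate of the minimum monthly cost quoted above.
    class ComputeCostEstimate
    {
        static void Main()
        {
            const double ratePerHour = 0.05;      // small instance, USD/hour, as quoted above
            const int instances = 2;              // minimum number of instances for the SLA
            const double hoursPerMonth = 24 * 31; // a 31-day month

            double compute = ratePerHour * instances * hoursPerMonth; // roughly $74
            double sqlAzure = 9.99;                                   // smallest SQL Azure database
            Console.WriteLine("Compute: ${0:F2}/month, with SQL Azure: ${1:F2}/month",
                              compute, compute + sqlAzure);
        }
    }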
Unfortunately, Azure is not designed to host hobbyist sites. You won't be able to beat $25 a month, but then you don't need things like SLAs and HA databases. But, as I commented earlier, it is nice to be able to work with it to train up on the platform.
There are ways of getting onto Azure cheaply.
Firstly, there is the free introductory offer. Very much a "toe in the water", just to play with and learn the platform. There aren't enough compute hours to host a site.
Secondly, if you're prepared to put in a little effort, you can join either the partner or BizSpark programs, which will give you access to enough resources to host a site for free, but there is an expectation that you're trying to build "something".
Oh, and for a hobbyist site you don't need the SLA, so a single instance is fine.

Doubts about Windows Azure Platform Introductory Special

I'm considering joining the Windows Azure Platform Introductory Special, but I'm a little bit afraid of losing money with it. I don't want to develop any fancy large-scale application; I just want to join to learn Azure and do my experiments. What should I be afraid of?
Under data transfer, it says "Data Transfers (per region)"; what does that mean?
Can I set limits to stop the app if it goes over this plan, in order to avoid getting charged?
Can it be "pre-pay" instead of "bill pay"?
Would it be enough for a blog?
Any experience so far?
Kind regards.
As ligget pointed out, Azure isn't cost-effective as a host for an application that can be easily deployed to a traditional shared hosting provider. Azure's target market is those who want dedicated resources without the need to micro-manage the infrastructure, plus the capability to easily scale up/down based on demand.
That said, here are the answers to the questions you posted:
Data transfers are based on bandwidth in and out of the hosting data center. Bandwidth for communication occurring between components (SQL Azure, Windows Azure, Azure Storage, etc.) in the same data center is not billable.
Your usage is not currently capped when the free quotas are used up. However, you will receive warning emails when those items approach their usage thresholds.
There is the option to pay for your subscription using a PO, but the minimum threshold for most of these operations is $500/month. So as a hobbyist, it's unlikely you want that route.
The introductory special does not provide enough resources for hosting a 24x7 personal blog. That level includes only 25 hours of compute resources. Each hour a single instance of your application is deployed will count against this, even if the application receives no traffic. Think of it like renting office space: you still pay rent on the office even if there are no customers there.
All this said, there's still much to be learned with the introductory special. The Azure development tools allow you to work with Windows Azure and Azure Storage locally and get a feel for how they work. The introductory special then lets you deploy those solutions so you can see what works and what doesn't (not everything that works locally works hosted).
I would recommend you host your blog somewhere else - it's a waste of resources running it on Azure and you'll find much cheaper options. The recently introduced extra-small instance would be a better choice in this case, but AFAIK it is charged separately as of now, e.g. even when you have an MSDN subscription those extra-small instance hours do not count towards the free Azure hours that come with the subscription.
There is no pre-pay option I know of, and it's not possible to stop the app automatically. It'll be running until the deployment is deleted (beware: even if suspended/stopped, the deployment will continue to accrue charges). I believe you will be sent a notification shortly before reaching your free hours threshold.
Be aware that when launching more than one instance you are charged for every hour of every instance combined. This can happen, for example, when you have more than one role in your Azure project (1 web role + 1 worker role - a separate instance will be started for each role).
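As a hedged, back-of-the-envelope sketch of how this per-instance counting interacts with the introductory special's 25 compute hours (figures as quoted in this thread):

    using System;

    // Rough estimate of how quickly deployed instance-hours consume the free quota.
    class IntroductorySpecialHours
    {
        static void Main()
        {
            const int freeComputeHours = 25; // included in the introductory special, as quoted above
            const int instances = 2;         // e.g. 1 web role + 1 worker role

            // Every wall-clock hour the deployment exists counts once per instance,
            // whether or not anyone visits the site.
            double wallClockHours = (double)freeComputeHours / instances;
            Console.WriteLine("Free quota gone after roughly {0:F1} wall-clock hours.", wallClockHours);
        }
    }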
Data transfer means your entire data transfer: blobs/Table storage/queues (transfers between your hosted service and storage account inside the same data center are free) + whatever data is transferred in/out of your hosted application, e.g. when somebody visits your pages. When you create storage accounts and hosted services in Azure you will specify a region that will host your account/app - hosting in Asia is slightly more expensive than in Europe/U.S.
Your best bet would be to contact Microsoft with these questions.

Minimize downtime in Azure

We are experiencing a very serious unscheduled downtime of our Azure application today, for what is now coming up to 9 hours. We reported it to Azure support, and the ops team is actively trying to fix the problem; I do not doubt that. We managed to get our application running on another "test" hosted service that we have and redirected our CNAME to point at that instance, so our customers are happy, but the "main" hosted service is still unavailable.
My own "finger in the air" instinct is that the issue is network-related within our data center (West Europe), and indeed, later in the day the service dashboard went red for that region with a message to that effect. (Our application is showing as "Healthy" in the portal, but is unreachable via our cloudapp.net URL. Additionally, threads within our application are logging SQL connection exceptions into our storage account, as it cannot contact the DB.)
What is very strange, though, is that the "test" instance I referred to above is also in the same data centre, has no issues contacting the DB, and its external endpoint is fully available.
I would like to ask the community if there is anything that I could have done better to avoid this downtime. I obeyed the guidance with respect to having at least two instances per role, yet I still got burned. Should I move to a more reliable data centre? Should I deploy my application to multiple data centres? How would I manage the fact that my SQL Azure DB is in the same data centre?
Any constructive guidance would be appreciated - being a techie, I've never had a more frustrating day, being able to do nothing to help fix the issue.
There was an outage in the European data center today with respect to SQL Azure. Some of our clients got hit and had to move to another data center.
If you are running mission-critical applications that cannot be down, I would deploy the application into multiple regions. DNS resolution is obviously a weak link right now in Azure, but it can be worked around (if you only run a website, it can be done very simply using Response.Redirect or similar).
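As a minimal, hedged sketch of that Response.Redirect idea (the secondary URL and the connection string are hypothetical placeholders, and a real implementation would cache the health check instead of hitting the database on every request):

    using System;
    using System.Data.SqlClient;
    using System.Web;

    // Global.asax.cs: send visitors to a deployment in another data center
    // when the primary region's database is unreachable.
    public class Global : HttpApplication
    {
        private const string SecondaryUrl = "http://myapp-secondary.cloudapp.net"; // hypothetical

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            if (!PrimaryDatabaseIsHealthy())
            {
                // Preserve the requested path when redirecting to the secondary deployment.
                Response.Redirect(SecondaryUrl + Request.RawUrl, true);
            }
        }

        private static bool PrimaryDatabaseIsHealthy()
        {
            try
            {
                // Cheap connectivity check; cache the result for a short period in practice.
                using (var conn = new SqlConnection(
                    "Server=tcp:myserver.database.windows.net;Database=mydb;User ID=me;Password=<placeholder>;"))
                {
                    conn.Open();
                    return true;
                }
            }
            catch (SqlException)
            {
                return false;
            }
        }
    }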
Now, there is a data synchronization service from Microsoft that will sync up multiple SQL Azure databases. Check here. This way, you can have mirror sites up in different regions and keep them in sync from the SQL Azure perspective.
Also, it would be a good idea to employ a third-party monitoring service that detects problems with your deployed instances externally. AzureWatch can notify you, or even deploy new nodes if you choose, when some of the instances turn "Unresponsive".
Hope this helps
I can offer some guidance based on our experience:
Host your application in multiple data centers, complete with SQL Azure databases. You can connect each application to its data-center-specific SQL Server. You can also cache any external assets (images/JS/CSS) on the data-center-specific Windows Azure machine or leverage Azure Blob Storage. Note: extra costs will be incurred.
Set up one-way SQL replication between your primary SQL Azure DB and the instance in the other data center. If you want to do bi-directional replication, take a look at the MSDN site for guidance.
Leverage Azure Traffic Manager to route traffic to the data center closest to the user. It has geo-detection capabilities, which will also improve the latency of your application. So you can map http://myapp.com to the internal URL of your data center, and a user in Europe should automatically get redirected to the European data center, and vice versa for the USA. Note: at the time of writing this post, there is no way to automatically detect a failure and fail over to another data center. Manual steps will be involved once a failure is detected, and the failover is a complete set (i.e. you will fail over both the Windows Azure AND SQL Azure instances). If you want micro-level failover, then I suggest putting all your config in the service config file and encrypting the values, so you can edit the connection string to connect instance X to DB Y.
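A hedged sketch of that config-file approach (the setting name "DatabaseConnectionString" is hypothetical, and the encryption step is not shown): read the connection string from the service configuration (.cscfg) rather than web.config, so it can be edited in the portal to repoint an instance without redeploying.

    using System.Configuration;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // Resolve the connection string from the Azure service configuration when running
    // in the cloud, falling back to app/web.config for local runs.
    public static class ConnectionStrings
    {
        public static string Current
        {
            get
            {
                if (RoleEnvironment.IsAvailable)
                {
                    return RoleEnvironment.GetConfigurationSettingValue("DatabaseConnectionString");
                }
                return ConfigurationManager
                    .ConnectionStrings["DatabaseConnectionString"].ConnectionString;
            }
        }
    }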
You are all set now. I would create or install a local application to detect the availability of the site. A better solution would be to check for the availability of application-specific components by writing a diagnostic page or web service, and then poll it from a local computer.
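For instance, a minimal, hedged sketch of such a poller (the diagnostic URL is a hypothetical example, and the alerting is left as a console message):

    using System;
    using System.Net;
    using System.Threading;

    // Console poller to run from a local machine against a diagnostic page.
    class AvailabilityPoller
    {
        static void Main()
        {
            const string probeUrl = "http://myapp.cloudapp.net/diagnostics/health"; // hypothetical

            while (true)
            {
                try
                {
                    var request = (HttpWebRequest)WebRequest.Create(probeUrl);
                    request.Timeout = 15000; // fail fast if the endpoint is unreachable
                    using (var response = (HttpWebResponse)request.GetResponse())
                    {
                        Console.WriteLine("{0:u}  {1}", DateTime.UtcNow, response.StatusCode);
                    }
                }
                catch (WebException ex)
                {
                    // This is where you would raise an alert (email, SMS, a monitoring service, etc.).
                    Console.WriteLine("{0:u}  DOWN: {1}", DateTime.UtcNow, ex.Message);
                }
                Thread.Sleep(TimeSpan.FromMinutes(1));
            }
        }
    }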
HTH
As you're deploying to Azure, you don't have much control over how SQL Server is set up. MS have already set it up so that it is highly available.
Having said that, it seems that MS have been having some issues with SQL Azure over the last few days. We've been told that it only affected "a small number of users". At one point the service dashboard had 5 data centres affected by a problem. I had 3 databases in one of those data centres go down twice, for about an hour each time, but one database in another affected data centre had no interruption.
If having a database connection is critical to your app, then the only way in the Azure environment to guard against problems that MS haven't prepared for (this latest technical problem, earthquakes, meteor strikes) would be to co-locate your SQL data in another data centre. At the moment the most practical way to do this is to use the Sync Framework. There is an ability to copy SQL Azure databases, but this only works within a data centre. With your data located elsewhere, you could then point your app at the new database if the main one becomes unavailable.
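As a hedged illustration of kicking off such a database copy from code (server, database and credential names are placeholders; as noted above, the copy can only target a server in the same data centre):

    using System.Data.SqlClient;

    // Start a SQL Azure database copy by running T-SQL against the destination server's master database.
    class DatabaseCopy
    {
        static void Main()
        {
            var connectionString =
                "Server=tcp:myserver.database.windows.net;Database=master;" +
                "User ID=myadmin;Password=<placeholder>;Encrypt=True;";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = conn.CreateCommand())
            {
                conn.Open();
                // The copy runs asynchronously on the server; progress can be tracked
                // via the sys.dm_database_copies view.
                cmd.CommandText = "CREATE DATABASE MyAppDb_Copy AS COPY OF myserver.MyAppDb";
                cmd.ExecuteNonQuery();
            }
        }
    }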
While this looks good on paper, though, it may not have helped you with the latest problem, as it did affect multiple data centres. If you'd just been making database copies on a regular basis, that might have been enough to get you through. Or not.
(I would have posted this answer on Server Fault, but I couldn't find the question.)
This is just about a programming/architecture issue, but you may also want to ask the question on webmasters.stackexchange.com.
You need to find out the root cause before drawing any conclusions.
However, my guess is that one of two things was the problem:
The ISP connectivity differs between the test system and your production system. Either they use different ISPs, or different lines from the same ISP. When I worked at a hosting company we made sure that our IP connectivity went through at least two different ISPs who did not share fibre to our premises (and, where we could, they had different physical routes to the building - the homing ability of backhoes when there's a critical piece of fibre to dig up is well proven).
Your data centre had an issue with some shared production infrastructure. These might be edge routers, firewalls, load balancers, intrusion detection systems, traffic shapers, etc. These are also often only installed on production systems. Defences here involve understanding the architecture and making sure the provider has a (tested!) DR plan for restoring SOME service when things go pear-shaped. The neatest hack I saw here was persuading an IPS (intrusion prevention system) that its own management servers were malicious, so you couldn't reconfigure it at all.
Just a thought - your DC doesn't host any of the WikiLeaks mirrors, or PayPal/Mastercard/Amazon (who are getting DDoSed by WikiLeaks supporters at the moment)?
