Is network bandwidth on Azure Web Sites different from Virtual Machines? - azure-web-app-service

From answers to other questions (such as this question), it sounds like different instance sizes offer different network throughput. My processing is I/O bound, and I'm trying to use web jobs to do it on a web site instance. Do web sites offer the same bandwidth as VMs with the same size/price point? Or if I need bandwidth higher than 100 Mb/sec, would I need to choose a solution other than web sites to do this processing?
Thanks,
David

Unfortunately, the bandwidth limits are not currently exposed.
At the end of the day, Azure App Service runs on Cloud Services machines under the hood, so the bandwidth should be quite similar to that of Web/Worker roles.
However, requests go through additional mechanisms (IIS ARR, for example), though that shouldn't add much overhead.
That being said, the best way would be to try it and scale out (using multiple instances) if you need more.
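In the meantime, if you want to see what a given instance actually delivers, a rough WebJob-style throughput check is easy to write. This is only a sketch; the blob URL is a placeholder for a large test file you control (e.g. a 100 MB blob in the same region):

# Rough throughput check you can run from a WebJob (or locally, for comparison).
# TEST_URL is a placeholder -- point it at a large test file you control.
import time
import urllib.request

TEST_URL = "https://<your-storage-account>.blob.core.windows.net/test/100mb.bin"  # placeholder

def measure_download_mbps(url, chunk_size=1024 * 1024):
    start = time.monotonic()
    total_bytes = 0
    with urllib.request.urlopen(url) as response:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.monotonic() - start
    return (total_bytes * 8) / (elapsed * 1_000_000)  # megabits per second

if __name__ == "__main__":
    print(f"Observed throughput: {measure_download_mbps(TEST_URL):.1f} Mb/s")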
I hope this helps!

Adding a small detail to dmatson's answer: right now the SLA covers availability rather than bandwidth, which means the numbers you actually see can vary. You will need to wait for an official SLA on bandwidth - or scale out/up by instance count or size. A very good FAQ I have found on that topic is here; many networking-related questions are covered.
https://blogs.msdn.microsoft.com/igorpag/2014/09/28/my-personal-azure-faq-on-azure-networking-slas-bandwidth-latency-performance-slb-dns-dmz-vnet-ipv6-and-much-more/

Related

cloud terminology, elasticity vs scalability and compute vs networking

First of all, I would like to say a big thank you to everyone reading my questions; I really appreciate your time and hope to be able to contribute back. Second, I did see part of my question already asked in another thread, but it does not answer it, plus my questions have a slightly different angle, so here goes:
Doesn't elasticity already include scalability? I see scalability and elasticity promoted as two separate features of the cloud in service promotions - is there a technical difference, or is it just a marketing play on terminology?
I have a similar confusion about compute and networking: doesn't compute power already include networking? I have seen them briefly presented as two separate advantages of cloud services.
I will give it a try :) But it will largely be my own understanding rather than citations from provider documentation.
Elasticity vs. Scalability
I interpret elasticity as the capability to react to more or less daily variation in resource needs. Unlike reserved instances or your own server hardware "in the basement", cloud providers offer both the resources and the management tools to let you use varying amounts of compute, network, ... resources from hour to hour or day to day.
So elasticity (in my mind) solves the business need to react / adapt to changing demand that might follow a pattern like day / night or season / off-season but might be relatively stable from year to year or even week to week.
Scalability, in my mind, is above all the ability of these "hyper-scalers" to let customers grow their systems continuously and almost without an upper limit. So I would say the average (e.g. weekly) usage can go up week after week for months on end, and you wouldn't run out of upgrade options with the cloud providers to help you serve more and more requests.
Compute and Networking
The cloud uses "software defined networking", which abstracts all that hardware stuff like switches and routers away from you as a user and offers connectivity options that would be hard to realize on your own / with traditional networking. So the networking capabilities of a major cloud provider are a feature set of their own, with lots of room for system improvements and capabilities. Therefore it is designed, serviced, billed... separately from other service classes like compute or storage.
A simple example or illustration of that might be a virtual machine (or several) that, on their own as standalone compute resources, have a network interface and a public IP attached to them. You can reach that machine, the machine can reach the internet (if you configure it that way), and you can install stuff on it. That's it - you have compute power.
But when you group virtual machines into e.g. application security groups, and use these groups as objects in resources that allow, deny, or redirect traffic internally or externally, and maybe tunnel traffic from these compute resources to your on-premises resources (like, in many cases, Active Directory Domain Services), you start to use advanced networking capabilities. But obviously there's much more, and networking can be one of the hardest parts of certification exams on cloud topics.
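To make that last point a bit more concrete, here is a deliberately simplified toy model in plain Python - not the Azure API or ARM schema, just an illustration of the idea that traffic rules reference named groups of machines rather than individual IPs:

# Toy illustration only -- NOT the Azure SDK. It sketches the idea behind
# application-security-group style rules: traffic decisions reference named
# groups of machines, and the lowest-priority matching rule wins.

RULES = [
    # (priority, source_group, dest_group, port, action)
    (100, "web-servers", "app-servers", 443, "Allow"),
    (200, "app-servers", "domain-controllers", 389, "Allow"),
    (4096, "*", "*", None, "Deny"),  # catch-all, like an implicit deny
]

GROUPS = {
    "web-servers": {"vm-web-1", "vm-web-2"},
    "app-servers": {"vm-app-1"},
    "domain-controllers": {"vm-dc-1"},
}

def evaluate(src_vm, dst_vm, port):
    """Return the action of the first (lowest-priority) matching rule."""
    for priority, src_grp, dst_grp, rule_port, action in sorted(RULES):
        src_ok = src_grp == "*" or src_vm in GROUPS.get(src_grp, set())
        dst_ok = dst_grp == "*" or dst_vm in GROUPS.get(dst_grp, set())
        port_ok = rule_port is None or rule_port == port
        if src_ok and dst_ok and port_ok:
            return action
    return "Deny"

print(evaluate("vm-web-1", "vm-app-1", 443))  # Allow
print(evaluate("vm-web-1", "vm-dc-1", 389))   # Deny (only app-servers may reach the DCs)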

Ways to reduce the latency achieved on my Virtual Machine in Azure

Does anyone have any ways in which I could reduce the latency for my Azure VM (unmanaged disk)? I currently have one running in the UK South region which is showing an average of 2.85 ms, which is fantastic compared to the 16 ms I was getting on my on-premises system.
However, if possible, I'd like to get this even lower, preferably down to 0.5 ms. Does anyone have any ways in which I could achieve this in the most cost-effective manner?
Thanks
Azure VM size can affect latency. Different VM sizes have different bandwidth limits.
You could check this blog.
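Whichever size you end up on, it's worth measuring rather than guessing. A minimal sketch, using plain TCP connect time as a rough proxy for round-trip latency (the hostname and port are placeholders for your own VM):

# Rough latency probe: time a TCP handshake to the VM a few times and report
# the average. HOST/PORT below are placeholders.
import socket
import time

HOST = "myvm.uksouth.cloudapp.azure.com"  # placeholder
PORT = 443
SAMPLES = 10

def connect_time_ms(host, port, timeout=2.0):
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000

if __name__ == "__main__":
    times = [connect_time_ms(HOST, PORT) for _ in range(SAMPLES)]
    print(f"avg {sum(times) / len(times):.2f} ms, best {min(times):.2f} ms")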
Well, to get 0.5 ms you need to be within roughly 50-60 km (maximum) of the VM. Because physics, you know?
Having said that, the only other way to reduce latency is to use ExpressRoute, but it probably won't help given you already have <3 ms latency.
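For what it's worth, the 50-60 km figure falls straight out of the speed of light in fibre (roughly 200,000 km/s, about two thirds of c), ignoring all switching and processing overhead. A quick back-of-the-envelope check:

# Back-of-the-envelope distance bound for a 0.5 ms round trip, fibre only.
speed_in_fibre_km_per_s = 200_000      # ~2/3 of the speed of light in vacuum
target_rtt_s = 0.5e-3                  # 0.5 ms round trip
one_way_s = target_rtt_s / 2
max_distance_km = speed_in_fibre_km_per_s * one_way_s
print(f"Max one-way distance: {max_distance_km:.0f} km")  # -> 50 km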

How many instances do I need for high availability with Azure "Web Sites"

How many instances do I need to configure to ensure that my site stays available during planned maintenance performed on the underlying OS/VM?
I understand the availability model for web roles, but I am not clear if it is the same for web sites on Azure.
With web roles, you have to configure at least 2 instances in separate upgrade domains to get the SLA from Microsoft and to ensure that your site is routinely available. This ensures that your site will stay available as Microsoft performs maintenance on the underlying OS (updating to a newer version of the OS image, etc).
What's the equivalent story for web sites? Do I need to have two instances of my web site or does Microsoft proactively move my site to a new VM before they perform maintenance (since web sites are more "managed" than web roles, that seems like it may be possible that they do this)?
Does the answer change between Free, Shared, and Reserved web sites?
Note, I understand that during sudden, unplanned downtime, having a single instance means my site will be unavailable until it is restarted on a new node. I am not worried about that for my low-volume hobby site. What I am more interested in is the routine, planned maintenance activities that are much more common than unplanned failures of the VM or host hardware.
Edit for clarification: Clearly, having 2 (or more) reserved instances is going to be the best option for high availability, but that is cost prohibitive for a hobby site at just short of $120 per month. My question really is if a single Shared or Reserved instance is going to have routine downtime for planned maintenance. I'm specifically wondering if anyone has concrete information on this (from a blog post I may have missed or from a phone call with the Microsoft support guys, etc). Maybe the answer is "no one knows because Microsoft hasn't clarified how things will work outside of the preview yet".
I also don't want to get hung up on the term "High Availability". I guess I am just looking for "Not Low Availability". It's just a hobby site, after all.
Please note that Azure Web Sites are still in preview. That means that there is no SLA whatsoever. When Web Sites come out of preview, I would suggest having at least 2 Reserved instances for high availability.
Both Free and Shared instances imply usage quotas - CPU/Memory/Bandwidth (Shared has no quota on bandwidth, but still applies quotas on CPU and Memory).
Having usage quotas in place runs counter to high availability, as I understand that term. That's why I suggest Reserved. The same goes for the number of instances.
There is no SLA since Web Sites are still in preview, but I also wondered about downtime during updates and found an explanation on twitter from a member of the Azure team:
https://twitter.com/nirmsk/status/342087643779198977
Apparently they keep a "buffer" of machines that are used if the current machine needs to be rebooted, crashes, or you change the scaling options (between Shared and Reserved).
Having multiple instances won't solve the problem since any change you make on one instance is automatically propagated to all other instances.
However, Azure Web Sites now supports having a staging version of your website and swapping between the staging and production versions of the site. That would be ideal for your scenario. See here for more info: http://azure.microsoft.com/en-us/documentation/articles/web-sites-staged-publishing/
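If you go the staging-slot route, a small pre-swap smoke test is cheap to add. A minimal sketch, assuming a hypothetical -staging slot URL (the swap itself is triggered from the portal or your deployment tooling):

# Minimal pre-swap check: make sure the staging slot answers with HTTP 200
# before you trigger the swap. STAGING_URL is a placeholder for your own slot.
import urllib.request

STAGING_URL = "https://mysite-staging.azurewebsites.net/"  # placeholder

def staging_is_healthy(url, timeout=10):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    print("OK to swap" if staging_is_healthy(STAGING_URL) else "Staging not healthy, hold the swap")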

Many sites in a WebRole : limits?

I know that we can have one WebRole serve many sites using the "Sites" section.
I'd like to know the limits of this: in which cases is it better to create a new WebRole?
The documentation claims that you have "full IIS" - so the limits will be quite high - I've seen inside one shared hosting setup with well over a hundred sites running on the same IIS box.
At a practical level, the limits will depend on:
how busy your individual sites are - how many resources each one requires in terms of network bandwidth, CPU, RAM, and disk space.
how hard it's going to be to administer all your sites as one entity - it can be quite hard to synchronise the upgrade of multiple sites at a development team level, and it can generate additional testing (are you sure that upgrading site B hasn't changed sites A and C?)
whether you want all your sites to scale together horizontally or whether you want to scale them independently.

hardware infrastructure for public web application

I'd like to start a free budget/personal finance site and will need plenty of horsepower and storage. I'm definitely a newbie, so how does one get started in terms of hardware infrastructure? Do I need to get a dedicated IP from my ISP and obtain my own servers? Do I go with Amazon, or SQL Server Data Services/Azure, or something like that? Are the latter services free, or is a discounted offering available to non-profit/free services such as the budget/personal finance site I'm looking to start?
If you don't mind writing your web application in Python, then I'd suggest using Google App Engine. See: What Is Google App Engine?
What I like to do when I have new ideas for a site is to find an inexpensive hosting solution ($10 per month). This allows me to test the idea and see if the site is going to be successful. If it is a flop, I haven't wasted much money and if it is successful I can upgrade to better hosting (dedicated server).
There are many hosting options available and several of them have great tools such as an online SQL Server management studio. Your other option would be to host it yourself if you are prepared to deal with firewall issues, backups, storage, etc.
Whether it is feasible to DIY varies a lot by country... if you have a decent broadband connection with a fixed IP, this can be the cheapest route to play around with first, especially if you need an awful lot of storage.
Note however that many fast broadband connections are only fast for downloads - when you're running a server, the speed your users will see is the upload speed, which is usually a lot less. Also, you'll need to do your own admin and backup etc.
Apart from this, most hosting options have a price tag on top, varying from virtual hosts (sharing a real machine), to colocation (your machine in somebody's data center), to cloud services like Amazon et al. (which scale well) - and you will need to shop around for the software stack and hardware features you really need.
There are really two ways to answer this question; what differentiates them is budget.
One is to properly design this solution, prototype it, benchmark the prototype, extrapolate anticipated user load, add overhead, and scale accordingly. This takes time and costs money, but gives you a supportable solution that serves your customers well.
The other is to just give something, anything a go and fix the problems as they come along. This is quicker and cheaper but might be a headache for a while and might p*** off your customers.
Basically it comes down to budget.
Best of luck.