I'm trying to understand the correct way to host a web service on Windows Azure.
After reading some of the available documentation, I came across these lines:
Windows Azure takes the following actions if a subscription's resource usage quotas are exceeded in a quota interval (24 hours):
Data Out - when this quota is exceeded, Windows Azure stops all web sites for a subscription which are configured to run in Shared mode for the remainder of the current quota interval. Windows Azure will start the web sites at the beginning of the next quota interval.
CPU Time - when this quota is exceeded, Windows Azure stops all web sites for a subscription which are configured to run in Shared mode for the remainder of the current quota interval. Windows Azure will start the web sites at the beginning of the next quota interval.
I was always under the impression that using a cloud solution would prevent events like this, since I really don't know ahead of time what resources my web service will need, and that the cloud would provision resources as they are needed (and of course I would be charged for them).
Is that assumption wrong?
EDIT
I found this great post that really explains Azure perfectly
Scott Hanselman - my own Q&A about Azure Websites and Pricing
If you are hosting a Windows Azure Website in Shared mode, then even though you are paying, certain quotas are still in place, because in the background you are sharing resources with other websites hosted on the same virtual machine.
If you are hosting in Standard mode, you no longer have those quotas and you will not experience this issue. As an added bonus, you can now set up Autoscale to automatically scale out your website under load.
Azure provides different levels of scalability depending on the hosting model you pick. For example, if you host your web service on an Azure Web Site you can't scale to thousands of servers; if you host it in a Cloud Service you can scale much further.
In Azure, scalability does not always happen transparently. For a web service your choices are Azure Web Sites, Azure Mobile Services and Azure Cloud Services, and none of these scales transparently: you need to tell Azure how you want scaling to be handled. Most of the time you can do this in the Azure management portal by defining auto-scale rules based on metrics such as the amount of memory used or the compute power consumed. Azure helps you gather metrics from a distributed environment, define scaling rules and scale without worrying about the underlying infrastructure, but you need to glue these pieces together yourself, and how you do so also determines how much you get billed.
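For concreteness, here is a minimal, self-contained Python sketch of what such an auto-scaling rule boils down to: a metric, a threshold, a time window over which the metric is averaged, and instance limits. The names are purely illustrative and are not the Azure portal or SDK objects.

```python
# Illustrative only: the logic an auto-scale rule encodes (metric, threshold,
# averaging window, instance limits). Hypothetical names, NOT the Azure API.
from dataclasses import dataclass
from statistics import mean

@dataclass
class ScaleRule:
    metric: str          # e.g. "CpuPercentage" or "MemoryWorkingSet"
    threshold: float     # scale out when the averaged metric exceeds this
    window_minutes: int  # how far back the metric is averaged
    min_instances: int
    max_instances: int

def desired_instance_count(rule: ScaleRule, samples: list, current: int) -> int:
    """Return the instance count the rule would ask for, given per-minute samples."""
    window = samples[-rule.window_minutes:]          # one sample per minute, for simplicity
    averaged = mean(window) if window else 0.0
    if averaged > rule.threshold:
        return min(current + 1, rule.max_instances)  # scale out by one instance
    if averaged < rule.threshold / 2:
        return max(current - 1, rule.min_instances)  # scale back in when load drops
    return current                                   # otherwise leave things alone

# Example: CPU has been high for the last 10 minutes.
rule = ScaleRule("CpuPercentage", threshold=70, window_minutes=10,
                 min_instances=1, max_instances=5)
cpu_per_minute = [20] * 50 + [95] * 10
print(desired_instance_count(rule, cpu_per_minute, current=1))  # -> 2
```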
Hope this makes sense.
Related
I have a couple unrelated websites that I host using the free subscription of Microsoft Azure. They are currently all under the same Resource Group and same App Service plan (though I don't understand those concepts well enough to know if they should or shouldn't be).
In the past, I've had a problem with individual apps eating up large amounts of some resource, like CPU Time or Data Out. The issue is that they all share one quota... so if Website A has a problem that consumes all the CPU Time for the day, then Website A, Website B, and Website C ALL go down until the quota resets, even though Website B and Website C aren't contributing to the problem.
In short, how do I prevent that so that each website has its own quota (not a shared quota)? Can I separate the websites into different Resource Groups? Different App Service plans? I tried to understand Microsoft's documentation on how these quotas work, but I was unable to find the answer to my question.
Thanks!
I have a couple unrelated websites that I host using the free subscription of Microsoft Azure. They are currently all under the same Resource Group and same App Service plan (though I don't understand those concepts well enough to know if they should or shouldn't be).
Resource groups are a way to group related Azure resources together. Typically, all resources in the same resource group are deployed and managed together, for example using ARM/Bicep templates.
An App Service plan defines the set of compute resources your web apps run on. The Free tier provides 60 minutes of CPU time per day. You can host multiple web apps on the same plan, as you do, but in that case they all draw from that same 60-minute CPU time quota.
In short, how do I prevent that so that each website has its own quota (not a shared quota)? Can I separate the websites into different Resource Groups? Different App Service plans? I tried to understand Microsoft's documentation on how these quotas work, but I was unable to find the answer to my question.
If you do not want the web apps to share CPU time, create a free-tier App Service plan per web app and deploy each web app to its own dedicated plan; a sketch of doing this with the Python management SDK is below.
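For illustration, here is a rough sketch using the azure-identity and azure-mgmt-web Python packages to give each site its own Free (F1) plan. The resource group, region and app names are placeholders, and the exact model and method names should be checked against the SDK version you have installed; the portal or Azure CLI can do the same job.

```python
# Sketch: one Free-tier (F1) App Service plan per web app, so each app gets its
# own daily CPU quota. Assumes azure-identity and azure-mgmt-web are installed;
# resource group, region and app names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import AppServicePlan, SkuDescription, Site

subscription_id = "<your-subscription-id>"
resource_group = "my-websites-rg"   # hypothetical resource group
region = "westeurope"

client = WebSiteManagementClient(DefaultAzureCredential(), subscription_id)

for app_name in ["website-a", "website-b", "website-c"]:
    plan_name = f"{app_name}-plan"
    # Create (or update) a dedicated Free-tier plan for this app.
    plan = client.app_service_plans.begin_create_or_update(
        resource_group,
        plan_name,
        AppServicePlan(location=region, sku=SkuDescription(name="F1", tier="Free")),
    ).result()
    # Create the web app on its own plan, so quotas are no longer shared.
    client.web_apps.begin_create_or_update(
        resource_group,
        app_name,
        Site(location=region, server_farm_id=plan.id),
    ).result()
```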
In our development environment (in Azure) we are experiencing an issue which we strongly suspect is due to bandwidth limitations of the underlying VM. By scaling our App Service up a pricing level (from Basic to Standard), the issue stops occurring. Utilisation of CPU, memory, connections and threads is all very low.
What are the actual network/bandwidth limitations for the different Azure App Service tiers?
These pages provide nothing on the matter:
Azure Web App sandbox
Azure subscription and service limits, quotas, and constraints
Could you elaborate on the issue you were facing?
There are other restrictions for Azure App Services, which are defined here:
https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox
Most of the limits in Azure App Service are tied to:
Size of the VM
Pricing Tier
The problem you described seems to be related more to VM size than to pricing tier.
The free tier quotas are well explained here:
App Service free quotas
All free App Services stop working once the bandwidth quota is exhausted, which is less than 200 MB daily, about 5 GB per month.
Hi, I have a web app deployed as a Cloud Service on Windows Azure, and I am performing some load/stress tests against this app. In the Azure Management Portal I have configured the web role to scale automatically when CPU goes over 40%.
I start the tests with only one instance of this web role. The test is configured so that the number of concurrent users increases over time, up to 2000 users.
After I start the test, I connect via Remote Desktop to the web role instance on Azure and monitor the CPU usage. After 10 minutes or so, the CPU sits constantly at 100% (and indeed my test requests take a very long time to complete). However, if I check the CPU of the very same web role in the Azure management portal, it shows 1, 2 or 6 percent; there was one peak of 70% but it dropped back immediately, and it never shows the values I see in Task Manager over Remote Desktop. Sometimes the dashboard page of my cloud service shows no value at all, meaning the graph is no longer being updated.
Furthermore, and this is the point of my question, NO SCALING of the web role instances is performed whatsoever.
Any ideas where/what I am missing? Feel free to ask if my explanation is incomplete.
Autoscaling on the CPU metric for a Cloud Service or Virtual Machine doesn't occur as quickly as you are expecting (~10+ minutes). In this scenario, the CPU metric is averaged across all instances of the service over a period of 1 hour, so your autoscaling actions will not be immediate.
You can read more about this and some recommendations for configuring your autoscale settings here.
If you want to tighten this up a little more then take a look at this post where I show how to set the TimeWindow using the Monitoring Service Management Library. You may be able to get closer to what you want taking this approach.
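To make the averaging effect concrete, here is a small self-contained Python illustration (made-up numbers, not the Azure API): a spike that pegs the instance at 100% for the last 10 minutes barely moves a 60-minute average, so a rule evaluated over an hour-long window never sees it cross a 40% threshold.

```python
# Self-contained illustration of why a 1-hour averaging window hides a CPU
# spike that Task Manager shows clearly. Not the Azure API, just arithmetic.
from statistics import mean

# One CPU sample per minute: 50 quiet minutes, then 10 minutes pegged at 100%.
cpu_per_minute = [5] * 50 + [100] * 10

print("last 10 min average:", mean(cpu_per_minute[-10:]))  # 100.0 -> would trigger scaling
print("last 60 min average:", mean(cpu_per_minute[-60:]))  # ~20.8 -> well under a 40% threshold
```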
A few things to consider:
1) As Rick pointed out, by default the CPU metric is averaged over an hour
2) If you start with only 1 server and then autoscale up to 2, your first server will get yanked out of the load balancer during the scale operation. You should really always have a minimum of 2 servers at all times.
3) Feel free to check out AzureWatch (link in my profile). It was designed to handle fairly advanced scaling scenarios and lets you configure scaling rules without touching the APIs.
The Azure management dashboard gives you the possibility to monitor metrics such as CPU utilization, network in/out, response time, among others.
But how can you measure consumption/availability of memory? I am running a web app that is memory intensive, and it is hard for me to gauge which instance types (or number of instances) I should provision without having an understanding of the memory situation across time.
Yes, my service is a web role on Azure cloud services, I am not talking about VMs (IaaS) here.
Thanks
In your Azure project, in the Roles folder you'll find a folder for each of your Roles. If you use the latest version of the SDK you'll find a file called diagnostics.wadcfg. This is where you'll be able to configure Performance Counters, like \Memory\Available Bytes. This file will also allow you to configure the sample rate (ex: every 30 sec) and the scheduled transfer period (how frequently the logs should be transferred to your Storage Account).
Then you can use a tool like the Azure Diagnostics Manager to view memory consumption over time.
More information: Using performance counters in Windows Azure
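If you prefer to pull the raw samples yourself rather than use a viewer, the transferred counters land in a table in the diagnostics storage account. Below is a minimal sketch using the azure-data-tables Python package; it assumes the classic table name (WADPerformanceCountersTable) and column names (CounterName, CounterValue, RoleInstance), which you should verify against your own storage account.

```python
# Sketch: read performance-counter samples that Azure Diagnostics transferred
# to table storage. Table and column names follow the classic Windows Azure
# Diagnostics convention; verify them against your storage account.
from azure.data.tables import TableServiceClient

conn_str = "<diagnostics-storage-connection-string>"  # placeholder
service = TableServiceClient.from_connection_string(conn_str)
table = service.get_table_client("WADPerformanceCountersTable")

# Filter on the counter configured in diagnostics.wadcfg.
for entity in table.query_entities(r"CounterName eq '\Memory\Available Bytes'"):
    print(entity["RoleInstance"], entity["CounterValue"])
```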
A way to do this from the Management Console:
On the Configure tab for your web role, in the monitoring section, change level to Verbose.
On the Monitor tab at the bottom, click Add Metrics
With monitoring set to Verbose, the available metrics should include Memory Available.
I'm considering joining the Windows Azure Platform Introductory Special, but I'm a little bit afraid of losing money with it. I don't want to develop any fancy large-scale application; I want to join just to learn Azure and do my experiments. What should I be afraid of?
In the pricing it says "Data Transfers (per region)"; what does that mean?
Can I set limits to stop the app if it goes over this plan, in order to avoid getting charged?
Can it be "pre pay" instead of "bill pay"?
Would it be enough for a blog?
Any experience so far?
Kind regards.
As ligget pointed out, Azure isn't cost effective as a host for an application that can easily be deployed to a traditional shared hosting provider. Azure's target market is those who want dedicated resources without the need to micro-manage the infrastructure, plus the ability to easily scale up/down based on demand.
That said, here's the answers to the questions you posted:
Data transfers are based on bandwidth in and out of the hosting data center. Bandwidth for communication occurring between components (SQL Azure, Windows Azure, Azure Storage, etc...) in the same data center is not billable.
Your usage is not currently capped when the free quotas are used up. However, you will receive warning emails when those items approach their usage thresholds.
There is the option to pay for your subscription using a PO, but the minimum threshold for most of these operations is $500/month, so as a hobbyist it's unlikely you'll want to go that route.
The introductory special does not provide enough resources for hosting a 24x7 personal blog. That level includes only 25 hours of compute resources, while keeping a single instance deployed around the clock uses roughly 720 instance-hours per month. Every hour a single instance of your application is deployed counts against the quota, even if the application receives no traffic. Think of it like renting office space: you still pay rent on the office even if no customers come in.
All that said, there's still much to be learned with the introductory special. The Azure development tools allow you to work with Windows Azure and Azure Storage locally and get a feel for how they work. The introductory special then lets you deploy those solutions so you can see what works and what doesn't (not everything that works locally works hosted).
I would recommend you host your blog somewhere else - it's a waste of resources running it on Azure and you'll find much cheaper options. A recently introduced extra small instance would be a better choice in this case, but AFAIK it is charged separately as of now, e.g. even when you have an MSDN subscription those extra small instance hours do not count towards free Azure hours that come with the subscription.
There is no pre-pay option I know of and it's not possible to stop the app automatically. It'll be running until the deployment is deleted (beware! even if suspended/stopped the deployment will continue to accrue charges). I believe you will be sent a notification shortly before reaching your free hours threshold.
Be aware that when launching more than 1 instance you are charged for every hour of every instance combined. This can happen for example when you have more than one role in your Azure project (1 web role + 1 worker role - a separate instance will be started for each role).
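A quick back-of-the-envelope calculation in Python makes the point (the role and instance counts are just an example):

```python
# Quick arithmetic: with one web role and one worker role you pay for two
# instance-hours every wall-clock hour, whether or not they receive traffic.
roles = {"web role": 1, "worker role": 1}  # instances per role
hours_deployed = 24 * 30                   # one month, deployed around the clock

billable_instance_hours = sum(roles.values()) * hours_deployed
print(billable_instance_hours)             # 1440 instance-hours for the month
```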
Data transfer means your entire data transfer: blobs/table storage/queues (transfers between your hosted service and storage account inside the same data center are free) plus whatever data is transferred in and out of your hosted application, e.g. when somebody visits your pages. When you create storage accounts and hosted services in Azure you specify the region that will host your account/app; hosting in Asia is slightly more expensive than in Europe or the U.S.
Your best bet would be to contact Microsoft with these questions.