Azure Web App runs very slowly

We have an Azure web app for our production environment. The site is built with the Umbraco 7.2.6 CMS.
The web app instance size is 'Large (4 cores, 7 GB memory)'.
The database is a Standard SQL Azure S0 (10 DTUs).
When running this site on my local machine against the same SQL Azure database (exactly the same instance) the site is very fast.
However, on Azure the site runs painfully slowly. I cannot find any obvious reasons for this.
Does anybody have any suggestions for troubleshooting this issue?

I had exactly the same issue: Azure Web App + Azure DB = slow DB responses.
On the other hand, if I ran the app locally on my computer and connected to the Azure DB, everything was flash quick.
So I checked my App Service plan and its location: I was using S1 located in the US, with the DB in Australia.
I upgraded the plan to S2 and moved my app and DB into the same region. Now it is more than 10 times faster.
I would suggest checking these two things first before looking into anything else.

Change the database to S2; although it does not look taxed by the load of Umbraco, this will make a big difference to performance.
Also, the underlying storage of Web Apps in Azure is quite slow, and since Umbraco is local-disk intensive (largely because of the Examine indexes), this becomes a factor when running a large site.
There is a plugin replacing Examine called "Azure Search for Umbraco" which will improve performance, but it may require a lot of rework depending on the site.
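If you stay on Examine, a cheaper mitigation (mentioned in a later answer here) is to sync the indexes to local temp storage via the useTempStorage attribute in ExamineSettings.config. A minimal sketch, assuming a stock Umbraco 7 internal indexer (provider names are illustrative):

<Examine>
  <ExamineIndexProviders>
    <providers>
      <!-- useTempStorage="Sync" keeps a synced copy of the index on fast
           local temp storage instead of the slow shared network drive -->
      <add name="InternalIndexer"
           type="UmbracoExamine.UmbracoContentIndexer, UmbracoExamine"
           useTempStorage="Sync" />
    </providers>
  </ExamineIndexProviders>
</Examine>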

I did not expect this problem; it seems unfortunate that the way to solve it is to upgrade. I think I will try the MySQL in-app database instead.

Related

Confused about how Azure App Service Local Cache is useful, lacking any use cases

I've read all of the documentation about App Service Local Cache, but I am struggling to see how it is useful. It essentially creates a read-only copy of your site directory, which for an MVC app is basically the whole app. But I can't find any information about use cases or why you'd want to do this.
I ask because it's been suggested that we move to implementing it, and I am trying to work out why we should.
I can see advantages if you do lots of reading from and writing to disk, but hardly any apps do that these days; we just use the database for everything, and logging goes directly to OMS.
Am I missing something major about this feature? To make my question less vague: does this feature offer something useful for a simple MVC website that displays data from a database and writes back to the database?
Even if your app doesn't perform a large amount of I/O operations, you can still benefit from using App Service Local Cache due to:
Quicker app restarts (since files are local, latency to the shared network drive is removed). Helpful for app settings updates.
Less application downtime if your app loses connectivity with the shared network drive (which causes restarts), which can happen during Azure update/patch operations on the underlying VM.
More benefits are discussed in the Channel 9 video on Local Cache: https://channel9.msdn.com/Shows/Cloud+Cover/Episode-201-Azure-Web-App-Local-Cache-with-Cory-Fowler
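If you do decide to try it, Local Cache is enabled through App Service application settings; a minimal example (the size setting is optional, and the value below is just the documented default):

WEBSITE_LOCAL_CACHE_OPTION = Always
WEBSITE_LOCAL_CACHE_SIZEINMB = 300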

Slow publishing in an Azure Umbraco website

I have an Umbraco website set up in Azure. The front-end website loads fine, but in the back office it takes more than 15 seconds from hitting "Save and Publish" to the check mark denoting success. I set up a test website in an Azure VM pointing to the same Azure SQL database that hosts the same Umbraco website, and I don't get this problem there.
I just spent some time debugging this same scenario. We had a situation where, once someone saved a node, it would take 30 seconds before the UI became responsive again. A network trace confirmed that we were waiting on API calls back from Umbraco.
We were on an S0 SQL instance, so I bumped it to S1 and performance got worse (indexes rebuilding, perhaps?).
We already had a few Azure-specific config options set (like useTempStorage="Sync" in our ExamineSettings.config). I ended up adding the line below, and our saves went from 30-35s to 1-2s!
<add key="umbracoContentXMLUseLocalTemp" value="true" />
This is from the load balancing guide available here - https://our.umbraco.org/documentation/Getting-Started/Setup/Server-Setup/load-balancing/flexible
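For context, the key goes under appSettings in web.config; a minimal sketch:

<appSettings>
  <!-- keep umbraco.config (the in-memory content cache XML) on fast local
       temp storage instead of the shared network drive -->
  <add key="umbracoContentXMLUseLocalTemp" value="true" />
</appSettings>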
Umbraco's Save and Publish is fairly DB intensive. An S1 instance is possibly not beefy enough. We use S2 for our dev sites, and even that can take up to 5 seconds to save and publish, depending on whether anything else is hitting the database.
You may also have other code running that's slowing things down. Things like Examine indexing can be quite slow on Azure. It could also be a plugin that's slowing things down.
How complex are your DocTypes? Also, which version of Umbraco are you running? Some older versions have bugs that cause performance issues on Azure.

Performance of Web Database against new standard service tier

I have a SQL Azure database. I'm currently using the "Web" edition, since my DB is small (about 300 MB; the maximum size is 5 GB). Having learned that the Web service tier will be retired in September 2015, I restored my live DB as a "Standard" S0, which has a maximum size of 2 GB. What I noticed is that performance with the new Standard database is poor compared to the retired Web edition. For instance, deleting 60,000 records used to take about 40 seconds in the Web edition and now takes two to three minutes on the Standard tier. Has anyone experienced this kind of thing, or is it just me?
Please give me your suggestions.
I had a similar issue: I migrated from SQL 2008 to Azure Web edition and took a performance hit, then switched from Web to S0 and took another hit. I think I'm now at S1.
I figured it was probably missing indexes, but with the ability to trace and tune gone in Azure, I had to do things a bit more manually.
First, look at http://msdn.microsoft.com/en-us/library/azure/ff394114.aspx; you want to get to the part where you can find the long-running queries.
Then, for each long-running query, look at its execution plan. To view a query's execution plan, you need to explicitly include it before executing the query: right-click the query window and select "Include Actual Execution Plan".
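The gist of that article is to query the execution-stats DMV; something along these lines works directly against SQL Azure:

-- Top 10 statements by average elapsed time (microseconds)
SELECT TOP 10
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time_us,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_time_us DESC;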
If this does not help, you need to do more work. Export the database (it comes out as a .bacpac file) and import it into SQL 2012 on a local server somewhere (I used an Azure VM): right-click on the Connection > Databases node and select "Import Data-tier application...". Then hook up an application/website to it, enable the query analyzer, and tune it the old way. This will reveal all the non-clustered indexes that magically disappeared; once you add those to your SQL Azure DB, you will get your performance back.
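As a shortcut before exporting, SQL Azure also exposes the missing-index DMVs, so you can often spot the lost indexes without leaving the cloud; a sketch:

-- Index suggestions the optimizer has recorded, ranked by rough impact
SELECT
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mid.index_handle = mig.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON mig.index_group_handle = migs.group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;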
Sure, you could just increase your Standard tier, but this can get expensive; it's better to tune and find out where things went wrong.

Windows Azure: node.js + MongoDB setup cheaply

I am building a chrome extension that needs a backend to store user information. So far, we have been using Firebase, a real-time solution that we can access from the front end. Yet we will need to store approximately 200GB of data so Firebase seems less viable for a startup budget.
We were looking into transitioning to hosting a node.js app in the cloud and communicating with a MongoDB database, and we were considering Azure for this purpose. Yet it seems the only way to do this is via MongoLab, which is still really expensive. Isn't there a way to store a lot of data in MongoDB without incurring huge costs? For some reason, the SQL databases look way cheaper, which does not make much sense to me.
Some links for reference:
SQL pricing:
http://azure.microsoft.com/en-us/pricing/details/sql-database/
mongodb pricing:
https://mongolab.com/plans/pricing/
Sure you can get Mongo running in Azure. You would simply fire up a new Linux VM and install Mongo and you're off!
http://azure.microsoft.com/en-us/pricing/details/virtual-machines/#Linux
Your question hints that the biggest priority is disk space. Storage in Azure is pretty cheap. Let's imagine you get an A2 instance with 60 GB of disk and you run out of space: you can easily attach new data disks in Azure, and the storage is really cheap.
The classic way to scale Mongo however is to use a replica set, in which case you'll need to pay for more nodes/machines as you add them.

Windows Azure reliability (my server just lost its drives & sites, then 20 minutes later they reappeared)

I am on a Windows Azure trial to evaluate migrating a number of commercial ASP.NET sites to Azure from dedicated hosting. All was going OK ... until just now!
Some background - the sites are set up under Web Roles (i.e. as opposed to Web Sites) using SQL Azure and SQL Reporting. The site content was under the X: drive (there was also a B: drive that seemed to be mapped to the same location). There are several days left of the trial.
Without any apparent warning, my test sites suddenly stopped working. Examining the server (through RDP) I saw that the B: and X: drives had disappeared (just C:, D: and E: were left, I think), and in IIS the application pools and sites had disappeared. In the Portal, however, nothing seemed to have changed; the same services and config seemed to be there.
Then about 20 minutes later the missing drives, app pools and sites reappeared and my test sites started working again! However, the B: drive was gone and now there was an F: drive (showing the same as X:); also the MS ReportViewer 2008 control that I had installed earlier in the day was gone. It is almost as if the server had been replaced with another (but the IIS config was restored from the original).
As you can imagine, this makes me worried! If this is something that could happen in production there is no way I would consider hosting commercial sites for clients on Azure (unless there is some redundancy system available to keep a site up when such a failure occurs).
Can anyone explain what may have happened, if this is possible/predictable under a live subscription, and if so how to work around it?
One other thing to keep in mind is that an Azure Web Role is not persistent. I'm not sure how you installed the MS ReportViewer 2008 control, but anything you add or install outside of a deployment package when you push your solution to Azure is not guaranteed to be available at some future point.
I admit that I don't fully understand the full picture of Azure's overall architecture, but I do know that Web Roles can and do re-create themselves from time to time. When a role recycles, it returns to the state it was in when it was deployed. This is why Microsoft suggests using at least two instances of your role: one or the other may recycle, but they will never both recycle at the same time, which is part of what guarantees the 99.9% uptime.
You might also want to consider an Azure VM. They are persistent but require you to maintain the server in terms of updates and software much in the way I suspect you are already doing with your dedicated hosting.
I've been hosting my solution in a large (4 core) web role, also using SQL Azure, for about two years and have had great success with it. I have roughly 3,000 users and rarely see the utilization of my web role go over 2% (meaning I've got a lot of room to grow). Overall it is a great hosting solution in my opinion.
According to the Azure SLA, Microsoft guarantees uptime of 99.9% or higher on all its products per billing month. (20 minutes in a month is roughly 0.05% downtime; not to dismiss it, just pointing out that they would still be within their SLA.)
The current status shows that SQL databases were having issues in the US North region last night, but all services appear to be up currently.
Personally, I have seen the dashboard go down and report very weird problems while the services I had deployed worked just fine all the way through it. When I experienced this, it was reported on the Azure status dashboard, the platform status page, and the Twitter feed.
While I have seen bumps, they are few and far between, and I find reliability to be perceptibly higher than other providers that I have worked with.
As for workarounds, I would suggest Standard mode for your websites and increasing the number of instances of the site. You might also try looking into the new add-ins that are available with the latest Azure release; Active Cloud Monitoring by MetricsHub might be what you require.
It sounds like you're expecting the web role to act as a Virtual Machine instance.
Web Roles aren't persistent (the machine can be destroyed and recreated at any time), so you should do any additional required setup as a 'startup task' in your Azure project; never install software manually.
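For example, installing something like the ReportViewer control would be scripted in ServiceDefinition.csdef rather than done over RDP; a minimal sketch (the role and .cmd file names are illustrative):

<WebRole name="MyWebRole">
  <Startup>
    <!-- runs every time the instance is (re)created, so the software is
         reinstalled automatically even after the VM is rebuilt -->
    <Task commandLine="InstallReportViewer.cmd"
          executionContext="elevated"
          taskType="simple" />
  </Startup>
</WebRole>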
Because of this you need at least 2 instances so that rolling upgrades (i.e. Windows security patches, hotfixes and so on) can be performed automatically without having your entire deployment taken offline.
If this doesn't suit your use case then you should look at Azure Virtual Machines, but you'll need to manage updates and so on yourself. It's usually better to use Web Roles properly as you can then do scaling and so on a lot more easily.