Umbraco 7.4.1 Performance Issues - Azure

I've got a site that was on 7.2.8 that I migrated to 7.4.1. There are quite a few host names I have to support given our development, test, staging and production environments.
I was hosting the site on Windows Azure but it was too slow for our client's liking. We moved it to their data center. Their production site runs great now, but we're looking at deploying updates and their staging site runs DOG slow. The code is nearly identical aside from some web.config updates (SSL cookies, X-Frame-Options host header, and a few other minor things).
What could be causing this problem? I've noticed that Umbraco seems to need some time to "do its thing" before speeding up, but I can't make heads or tails of it. Any recommendations would be greatly appreciated!
Thank you!

What is the staging site hosted on? Can you provide more details? It's possible the staging server is underpowered. If you have uSync turned on and have a lot of DocTypes, that can slow down site startup. Other things to look out for are any custom startup events, and where the DB is hosted. If it's an Azure DB, it'll be super slow unless you're forking out for one of the higher tiers.
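If you want evidence for whether startup (rather than request handling) is the slow part, you can time the boot sequence. A minimal sketch for Umbraco 7 - the class name and log message are hypothetical, but ApplicationEventHandler and LogHelper are the standard Umbraco 7 APIs:

```csharp
using System.Diagnostics;
using Umbraco.Core;
using Umbraco.Core.Logging;

// Hypothetical handler: times Umbraco's boot so you can compare
// staging vs. production startup in App_Data/Logs.
public class StartupTimingHandler : ApplicationEventHandler
{
    private static readonly Stopwatch BootTimer = new Stopwatch();

    protected override void ApplicationInitialized(
        UmbracoApplicationBase umbracoApplication,
        ApplicationContext applicationContext)
    {
        BootTimer.Start();
    }

    protected override void ApplicationStarted(
        UmbracoApplicationBase umbracoApplication,
        ApplicationContext applicationContext)
    {
        BootTimer.Stop();
        // A long time here points at startup events, uSync imports,
        // or a slow database rather than request handling itself.
        LogHelper.Info<StartupTimingHandler>(
            "Umbraco boot took " + BootTimer.ElapsedMilliseconds + " ms");
    }
}
```

Compare the logged figure between staging and production; a big gap points at startup events, uSync imports or the database connection rather than the code itself.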

Related

Extremely high latency on Azure Web App

We currently self-host our website, but we've had a few downtime incidents outside of our control and we're looking at moving it into Azure. It's an ASP.NET website using Umbraco as the CMS.
Yesterday I signed up for an Azure trial, migrated a copy of our database onto an Azure SQL Server instance, spun up a new Web App and used Web Deploy to upload the app. This was my first experience with Azure, and I was pleasantly surprised at how easy it was. There were a few issues working out how to hook up my new app to my new database but overall it was a simple process.
But the performance is awful. The database is a Standard S2 and I initially created the web app on the Free tier. I was experiencing both poor download speed and latency. The first thing I tried was bumping up the Web App's scale, so I took it to Standard Medium. This seems to have fixed the download speed, but the latency is still impressively bad.
I'm using Google Chrome's network panel to test the speed. Here's what I get downloading an image from our server:
[screenshot: Chrome network timings over our local network]
Obviously this is going to be fast as it's going over our local network, but this does at least show that the application is not the issue.
Here's what I get with Standard S2 hosted on Australia East:
[screenshot: Chrome network timings against the Azure-hosted site]
The speed once the download has started is not too bad, but having a 41.92s TTFB is insane! It's not consistent, sometimes I get as low as 8s, but that's still unacceptable.
I don't have this issue when visiting other sites, so my internet is not the issue. I've tried using Small S2 and Large S2 with no change in results.
Am I doing something wrong? I find it difficult to believe that every Azure customer experiences this level of performance.
EDIT: Here's what we've learned in the comments so far:
Setting Always On does not help.
Using the Azure CDN is just as slow.
I also had enormous performance problems within the Azure environment. The cause was the activation of Application Insights. After I deactivated it, the response times were back in the millisecond range and no longer 2-3 seconds.
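If anyone wants to rule Application Insights in or out without ripping the SDK out, the classic ASP.NET SDK can be silenced at startup. A minimal sketch, assuming the Microsoft.ApplicationInsights packages and a Global.asax:

```csharp
using Microsoft.ApplicationInsights.Extensibility;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Disables all Application Insights telemetry without
        // removing the SDK, so it's easy to flip back on for testing.
        TelemetryConfiguration.Active.DisableTelemetry = true;
    }
}
```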
This was an issue with my own network's configuration. I'm not sure how to resolve it, but I can't reproduce this issue when using my phone's internet so it's clearly not an Azure problem.

Is there any reason why a dev server should be accessible from the internet?

This is a very generic question that popped into my mind. The reason is that I came across a website's dev server which leaked sensitive information about a database connection due to an error. I was stunned at first, and now I wonder why someone would put a development server out on the internet and make it accessible to everyone?
For me there is no reason for doing this.
But it certainly did not happen by accident that a company created a subdomain (dev.example.com) and pushed development code to it. So what could be the reason to ignore such a high security risk?
A quick search did not bring up any information about this. I'm interested in any further reading on this specific topic.
Thank you in advance
There is no reason for your dev servers to be accessible by the general public.
As a customer I just had an experience with a private chef site where I spent time interacting with their dev server because it had managed to get crawled by Bing. Everything was the same as the live site, but I got increasingly frustrated because paying a deposit failed to authorise. The customer support team had no idea I was on the wrong site either; the only difference was the URL. My e-mail address is now in their test system, which sends me spam every night when they do a test run.
Some options for you to consider, assuming you don't want to change the code on the page:
- IP whitelisting is the bare minimum.
- Have a separate login page that devs can use that redirects to the dev site with the correct auth token - bonus points for telling stray users that this is a test site and the live site is at https://.....
- Use a robots.txt to make sure you don't get indexed (see the sample after this list).
- Hide it all behind a VNET - this really isn't an issue anymore with VPNs or services like Bastion.
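For the robots.txt option above, a minimal file that asks every crawler to stay away from the whole site looks like this:

```
User-agent: *
Disallow: /
```

Bear in mind robots.txt is advisory only - a well-behaved crawler like Bing will honour it, but it is no substitute for the whitelisting or authentication options above.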
Also consider the following so your devs/testers don't accidentally use the wrong site:
- Have a dev CSS theme to make it obvious it's a test system (this assumes you do visual testing later in your pipeline).
- Use a banner to make it clear this is a dev site.
Note that this would be a dev server. If you are using ringed/preview/progressive deployment then these should work just as well as the live site because they are the live site.
It's extremely common for a development environment, or any "lower" environment for that matter, to be exposed to the public internet. Today, especially with more and more companies working in the public cloud and having remote team members, it's far more productive for your development team or UAT users to work without needing to set up a VPN connection or a faster, more expensive direct connection to the cloud from your company's on-premises network.
It's important to mention that exposure to the public internet does not mean you shouldn't have some kind of HTTP authentication in these environments that hides the details of your website. You can also use a firewall with an IP address whitelist (see the sketch below). This is still very important so you don't expose your product and lose a possible competitive advantage. It's also important because lower environments tend to be more error prone, and important details about the inner workings of your application may accidentally show up.
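As a sketch of the firewall/whitelist idea on IIS, the built-in "IP and Domain Restrictions" feature can be driven from web.config. The address range below is a placeholder, and the feature has to be installed and unlocked at the server level:

```xml
<configuration>
  <system.webServer>
    <security>
      <!-- Deny everyone except the listed range (placeholder office network). -->
      <ipSecurity allowUnlisted="false">
        <add ipAddress="203.0.113.0" subnetMask="255.255.255.0" allowed="true" />
      </ipSecurity>
    </security>
  </system.webServer>
</configuration>
```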

Azure configuration for a university student

Hopefully my question is in the right forum here. I've just checked out the pricing model of Windows Azure and its different configuration options:
http://www.windowsazure.com/de-de/pricing/calculator/
I have been working as a developer for almost two years now and have worked a lot with IIS and WPF. As a little private project I checked out HTML5 and JS with the MVC 4 Web API, and wondered what Azure configuration I'd need to host an MVC 4 Web API project. Would it rather be a virtual machine or a full compute instance? What benefits does one grant over the other?
I am going to start my studies soon, so I'd like the cheapest option I can possibly get. I won't use it a lot (mainly for testing), and I don't think there will be too much traffic either. Would a virtual machine also include the possibility of using IIS?
Could I also run an MVC project on something other than a VM/full compute instance?
And what would happen if for some reason my traffic just explodes? Would my services just be shut down until I increase the power of my machine? Or would I just get a huge bill and be quite surprised?
Use Web Sites.
You can start with 10 Web Sites absolutely free, so this is the cheapest option. And it certainly supports MVC 4 Web API.
For starters, you can get a 3-month trial with enough credits to begin. By default you'll have a spending limit on your account. This means that if you start to get too much traffic, your services will shut down and you won't have to pay anything extra. I think you can configure how much you are willing to pay, but I never tried; mine is still the default, which is $0.
You should start with shared Web Sites and move to a reserved instance, a VM or a web role later if you ever need to scale up or out.

Azure releasing complications

We are considering building a web application and relying on Azure. The main idea behind this application is that users are able to work together on specific tasks in the cloud. I'd love to go for the concept of instant releases where users are not bothered by downtime, but I have no idea how I can achieve this (if it's possible at all). Let's say 10,000 users are currently working in this web application, and I release software with database updates.
What happens when I publish a new release of my software into Azure?
What will happen to the brilliant work in progress of my poor users?
Should I bring the site down first before I publish a new release?
Can I "just release" and let users enjoy the "new" world as soon as they request a new page?
I am surprised that I can't find any information about release strategies in Azure - am I looking in the wrong places?
Windows Azure is a great platform with many different features which can simplify lots of software management tasks. However, bear in mind that no matter how great a platform you use, your application depends on proper system architecture and code quality - a well-written application will work perfectly fine; a poorly written application will fail. So do not expect Azure to solve all your issues (but it may help with many).
What happens when I publish a new release of my software into Azure?
Windows Azure Cloud Services has the concept of Production and Staging deployments. A new code deployment goes to Staging first. There you can do a quick QA (and sometimes "warm up" the application to make sure all its caches are populated - but that depends on the application design) and then perform a "Swap": your staging deployment becomes production and your production deployment becomes staging. That gives you the ability to perform a rollback in case of any issues with the new code. The swap operation is relatively fast as it is mostly an internal DNS switch.
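For what it's worth, the swap can be scripted as well as clicked in the portal. A one-liner sketch, assuming the classic Azure (Service Management) PowerShell module and that "MyCloudService" is a placeholder for your cloud service name:

```powershell
# Swaps the Staging and Production deployment slots of a cloud service.
Move-AzureDeployment -ServiceName "MyCloudService"
```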
What will happen to the brilliant work in progress of my poor users?
It is always a good idea to perform code deployments during the lowest site load (night time). Sometimes that is not possible, e.g. if your application is used by a global organization; then you should use "the lowest" activity time.
In order to protect users you could implement solutions such as an "automatic draft save" which happens every X minutes (see the sketch below). But if your application is designed to work with cloud systems, users should not see any functionality failure during a new code release.
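To make the "automatic draft save" idea concrete, here is a minimal sketch of an endpoint the client could call every few minutes. All the names here are hypothetical, and the in-memory dictionary stands in for whatever persistent store you would really use:

```csharp
using System;
using System.Collections.Concurrent;
using System.Web.Http;

// Hypothetical DTO for a user's work in progress.
public class DraftDto
{
    public string Content { get; set; }
}

public class DraftsController : ApiController
{
    // In-memory stand-in; use a database or blob storage in practice.
    private static readonly ConcurrentDictionary<Guid, DraftDto> Store =
        new ConcurrentDictionary<Guid, DraftDto>();

    // The client PUTs to /api/drafts/{id} every X minutes, so a release
    // (or a swap) loses at most a few minutes of work.
    [HttpPut]
    public IHttpActionResult Put(Guid id, [FromBody] DraftDto draft)
    {
        Store[id] = draft;
        return Ok();
    }
}
```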
Should I bring the site down first before I publish a new release?
That depends on the architecture of your application. If the application is very well designed then you should not need to do that. The Windows Azure application I work with has a new code release once a month, and we have never had to bring the site down since the beginning (for the last two years).
I hope that gives you a better understanding of Azure Cloud Services.
Yes you can.
I suggest you create one of the Visual Studio template applications and take a look at the "staging" and "production" environments shown when you click your Azure site in the management portal.
Say, for example, the users work on the "production" environment, which is connected to Sqlserver1. You publish your new release to "staging", which is also connected to Sqlserver1. Then you just switch the two using the swap, and staging becomes the "production" environment.
I'm not sure what happens to their work if they have something stored in sessions or server caches - I guess it will be lost. But client-side stuff will work seamlessly.
"Should I bring the site down first before I publish a new release?"
I would bring up a warning (if the users' work consists of session state and so forth) announcing brief downtime in 5 minutes, and then after the switch tell everyone it is over.

Removing a Web Front-End server from Farm (Load Balancing)

I am currently working on a project where we have developed a portal on SharePoint. Currently we have two servers using load balancing. We're experiencing a lot of difficulties connected to this, so we are thinking about removing one of the web front-end servers from the farm.
Could this cause any kind of problems that you can think of? I want to be sure before I recommend this to our client. Anything you can think of would be great; any pros of doing this would also be appreciated.
The load balancing was agreed on from the beginning of the project, before we came in as consultants.
(I know this could be posted on SharePoint.StackExchange as well, but this could be general knowledge for anyone else too.)
Since "two servers" is not a good idea anyway (you'd normally create at minimum a three server farm - two load balanced web front-ends and one indexing/job server), you can easily merge them into one server. Steps would be like this:
- enable all the services on the server which stays there
- remove the other server from "web front-end" role
- uninstall sharepoint from the other server
This might require recreation of your Shared Services Provider if you are hosting some of the SSP things on the server you are removing.
