Azure releasing complications

We are considering building a web application on Azure. The main idea behind this application is that users can work together on specific tasks in the cloud. I'd love to go for the concept of instant releasing, where users are not bothered by downtime, but I have no idea how to achieve this (if it is possible at all). Let's say 10,000 users are currently working in this web application, and I release software with database updates.
What happens when I publish a new release of my software into Azure?
What will happen to the brilliant work in progress of my poor users?
Should I bring the site down first before I publish a new release?
Can I "just release" and let users enjoy the "new" world as soon as they request a new page?
I am surprised that I can't find any information about release strategies in Azure; am I looking in the wrong places?

Windows Azure is a great platform with many features that can simplify a lot of software management tasks. However, bear in mind that no matter how great a platform you use, your application still depends on proper system architecture and code quality: a well-written application will work perfectly fine; a poorly written application will fail. So do not expect Azure to solve all your issues (though it may help with many).
What happens when I publish a new release of my software into Azure?
Windows Azure Cloud Services has the concept of Production and Staging deployments. New code is deployed to staging first. There you can do a quick QA pass (and sometimes "warm up" the application to make sure its caches are populated, though that depends on the application design) and then perform a "Swap": your staging deployment becomes production and the production deployment becomes staging. That gives you the ability to roll back in case of any issues with the new code. The swap operation is relatively fast, as it is mostly an internal routing (VIP) switch.
What will happen to the brilliant work in progress of my poor users?
It is always a good idea to deploy code during the period of lowest site load (e.g., at night). Sometimes that is not possible, for example if your application is used by a global organization; in that case, pick the time of lowest activity.
To protect users you could implement features such as an "automatic draft save" that runs every X minutes. But if your application is designed to work with cloud systems, users should not see any loss of functionality during a new code release.
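As a rough illustration of the "automatic draft save" idea, here is a minimal client-side sketch. The shape of the API is an assumption for illustration: how the draft is read and where it is posted depends entirely on your application.

```javascript
// Hypothetical sketch of "automatic draft save": work in progress is
// periodically pushed to the server, so a deployment (or a dropped
// connection) loses at most one interval of work.
// getDraft/postDraft are placeholders you would wire to your own app.
function createAutoSaver(getDraft, postDraft) {
  let lastSaved = null;
  function tick() {
    const draft = getDraft();
    // Skip the round-trip when nothing has changed since the last save.
    if (draft === lastSaved) return false;
    postDraft(draft);
    lastSaved = draft;
    return true;
  }
  return { tick };
}

// In the browser you might schedule it like this (names are assumptions):
//   const saver = createAutoSaver(
//     () => editor.getContent(),
//     draft => fetch('/api/drafts', { method: 'POST', body: draft })
//   );
//   setInterval(saver.tick, 5 * 60 * 1000); // every 5 minutes
```

Keeping the tick function separate from the timer makes the save logic easy to test and lets you also trigger a save on events such as "before unload".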
Should I bring the site down first before I publish a new release?
That depends on the architecture of your application. If the application is very well designed, you should not need to. The Windows Azure application I work with gets a new code release once a month, and we have never had to bring the site down since the beginning (for the last two years).
I hope this gives you a better understanding of Azure Cloud Services.

Yes you can.
I suggest you create one of the Visual Studio template applications and take a look at the "staging" and "production" environments shown when you click your Azure site in the management portal.
Say, for example, the users work on the "production" environment, which is connected to Sqlserver1. You publish your new release to "staging", which is also connected to Sqlserver1. Then you just switch the two using the swap, and staging becomes the "production" environment.
I'm not sure what happens to their work if something is stored in sessions or server caches; I would guess it will be lost. Client-side state, however, will survive the swap.
"Should I bring the site down first before I publish a new release?"
I would show a warning (if the users' work consists of session state and the like) announcing brief downtime in 5 minutes, and then after the switch tell everyone it is over.


Google Analytics A/B testing with 2 site instances

I am getting ready to release a new web site in the coming weeks, and I would like the ability to run multivariate or A/B tests between two versions of the site.
The site is hosted on azure, and I am using the Service Gateway to split traffic between the instances of the site, both of which are deployed from Visual Studio Online. One from the main branch and the other from an "experimental" branch.
Can I configure Google Analytics to help me track the success of my tests? From what I have read, Google Analytics seems to focus on multiple versions of a page within the same site for running its experiments.
I have thought of using 2 separate tracking codes, but my customers are not overly technically savvy, so I would like to keep things as simple as possible. I have also considered collecting my own metrics inside the application, but I would prefer to use an existing tool, as I don't really have the time to implement something like that.
Can this be done? Are there better options? Is there a good NuGet package that might fulfil my needs? Any advice is welcome.
I'd suggest setting a custom dimension that tells you which version of the site the user is on. Then in the reports you can segment and compare the data.
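A minimal sketch of what that could look like on the page, assuming the classic analytics.js snippet is already in place. The dimension slot (`dimension1`) must first be defined in the GA admin UI, and the hostname-based rule for deciding which variant is serving is purely an assumption; use whatever signal your Service Gateway routing actually exposes.

```javascript
// Hypothetical sketch: tag each pageview with a custom dimension that
// identifies which deployed instance (main vs. experimental) served it,
// so GA reports can be segmented and compared per variant.

// Decide which variant this instance is. The "experimental" hostname
// pattern is an assumption for illustration.
function siteVariant(hostname) {
  return /experimental/.test(hostname) ? 'experimental' : 'main';
}

// Call after the standard analytics.js bootstrap; `ga` is the global
// command queue created by the GA snippet (passed in here for testability).
function tagPageview(ga, hostname) {
  ga('set', 'dimension1', siteVariant(hostname));
  ga('send', 'pageview');
}

// In the page: tagPageview(window.ga, window.location.hostname);
```

Because both instances share one tracking code, the customers see nothing unusual, and the variant shows up as a segmentable dimension in the reports.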

Mitigating the risks of auto-deployment

Deployment
I currently work for a company that deploys through GitHub. However, we have to log in to all 3 servers and update them manually with a shell script. When talking to the CTO, he made it very clear that auto-deployment is like voodoo to him, which is understandable. We have developers in 4 different countries working remotely. If someone were to accidentally push to the wrong branch, we could experience downtime, and with our service we cannot be down for more than 10 minutes. And with all of our developers in different timezones, our CTO wouldn't know until the next morning, and we'd have trouble meeting with the developers who caused the issue because of the vast time differences.
My Problem: Why I want auto-deploy
While working on my personal project, I decided it might be in my best interest to use auto-deployment; still, my project is mission critical, and I'd like to mitigate downtime and human error as much as possible. The problem with manual deployment is that I simply cannot deploy manually to up to 20 servers via SSH in a reasonable amount of time. The problem is compounded when I consider auto-scaling: I'd need to spin up a new server from an image and deploy to it.
My Stack
My service is developed on the Node.js Express framework. These environments are rich in deployment and bootstrapping utilities. My project uses npm's package.json to uglify my scripts on deploy, and it runs the service as a daemon using forever-monitor. I'm also considering Grunt to further bootstrap both my production and testing environments.
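For concreteness, the npm wiring described above might look roughly like the fragment below. The file paths, the `uglifyjs` invocation, and the use of the `forever` CLI are illustrative assumptions, not the actual configuration.

```json
{
  "scripts": {
    "build": "uglifyjs src/app.js -o dist/app.min.js",
    "start": "forever start dist/app.min.js",
    "deploy": "npm run build && npm run start"
  }
}
```

Keeping build and start as separate scripts means the same steps run identically whether a human or an auto-deploy hook triggers them.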
Deployment Methods
I've considered so far:
Auto-deploy with git, using webhooks
Deploying manually with git via shell
Deploying with npm via shell
Docker
I'm not well versed in technologies like Docker, but I'm interested, and I'd definitely give points to whoever gives me a good explanation of why I should or shouldn't use Docker, because I'm very interested in it. Other methods are welcome.
My Problem: Why I fear auto-deploy
In a mission-critical environment downtime can put your business on hold, and to make matters worse there's a fleet of end users hitting the refresh button. If someone pushes code that isn't passing the build to the production branch and it is auto-deployed, I'm looking at a very messy situation.
I love the elegance of auto-deployment, but the risks make me skeptical. I'm very much in favor of making myself as productive as possible, so I'm looking for a way to deploy to many servers with ease and in a very efficient manner.
The Answer I'm Looking For
Explain to me how to mitigate the risks of auto-deployment, or explain to me an alternative which is better suited to my project. Feel free to ask for any missing details in the comments.
No simple answer here. I offer a set of slides published by Mike Brittain from Etsy, a company that practices continuous deployment:
http://www.slideshare.net/mikebrittain/mbrittain-continuous-deploymentalm3public
Selected highlights:
Deploy frequently and in small batches
Use config/feature flags to control system behaviour and "dark release" major features
Code review all changes to the production branch
Invest in monitoring and improve the feedback loop
Manage "services" separately from the "application", and be mindful of run-time versions and backward-compatible changes.
Hope this helps

How does one use a different LightSwitch database connection string per user and environment?

I develop software in a multi-programmer, source-controlled world (TFS). Moreover, I have multiple environments (i.e., individual developer desktop machines, QA, and production in the Azure cloud). The problem is, I'm unsure of the best way to use a different database connection string for each environment. I realize that I can manually update the web.config in the Server project, but that causes a clash when check-in occurs. This is a pretty critical issue for our team, so any help in this regard would be greatly appreciated. Thanks.

Manage multiple test environments

I need some help trying to make my lab tests better!
Where I work every team has one server to test new features and bug fixes.
Then each server is divided into slots. When a new feature is done, the programmer needs to check whether there is a slot available to deploy the software to. If there is no slot available, the programmer needs to create another one!
My question is: is there any software that helps programmers manage the available environments? Something that tells me which slot is available and which feature is being tested in each slot?
Thanks!

Microsoft CRM 2011 On Demand Development and Test Environments

Does anyone have recommendations on the best way to set up development and testing environments for Microsoft CRM 2011 On Demand?
The recommendations I have seen so far include:
Paying for another account with only one user
Creating a VM
Going with a partner hosted environment
You will need to be a little more specific. What's wrong with the 3 options you have listed? Is it the cost, or the time to configure them?
That being said, what I do is sign up for the free 30 day trial.
First, sign up for a new Windows Live account.
Second, click here to sign up for the 30 day account.
Third, I always write down the login & url because I always forget them.
I'll have anywhere from 1 - 5 of these running at once.
The main benefit is the control this gives me. Since you can't access the SQL server directly with On Demand, it forces you to make your configurations & customizations the correct way.
Your other option is to set up a VM environment and create a new instance every time you need a clean setup. This is not my preferred option, since you need good hardware to run the environment (otherwise the performance penalty is huge).
