I am a bit confused about deployment of cloud services, and in particular whether just the code can be replaced - Azure

Just getting used to VS2012 publishing of Cloud Services. At present I have a one-instance web role which contains an MVC3 application. I can publish it to Azure without issue, and it creates the Cloud Service > Web Role > VMs. Fine. Takes a little while.
However, when I make a small code change, how can I deploy just that change without replacing all the VMs that implement the web role?
It seems that code and infrastructure are inseparable, or have I misunderstood? Is there a way to update just the code?
Thanks.

When you roll out an update, you upload an entire package containing not only your code files, but also the configuration for the VM, such as the number of instances, ports to open on the firewall, local resources to allocate, etc. These configuration settings are part of the code package - so there is more going on than just updating code files.
However, there are a couple of methods you can use to have more granular control over updates.
Use Web Deploy. One thing to keep in mind is that any automatic service updates will restore your website to the last fully-deployed package, which may not be as up-to-date. You would only want to use this in staging, then do a full package update for the production rollout.
Use an Azure Web Site instead, which allows continuous integration with your source control provider, and direct updates to the code.
Use an IaaS VM instead. These are basically the same as running your own custom server in the Azure cloud, and you have full control over the OS. However, you also have full responsibility for keeping the OS updated and secure.
You can also enable RDP to your Azure web role VMs. You will find all your code files and IIS there, but I wouldn't recommend updating your code this way, for the same reasons listed in #1.

The code and infrastructure, in a cloud service, are actually separate. All you upload is a deployment package containing just your code and supporting libraries / files. You don't upload a VHD. Azure provides that for you, spinning up a VHD and then accessing your code in a file folder on that VHD. The same process happens each time you scale out to more instances.
When you make a code change, you build a new deployment package and deploy that. If you do it as an in-place update (vs. delete + redeploy), each role is updated on each instance (when you have multiple instances of a role, they're not all updated at the same time). You can even specify that you only want a single role within the deployment to be updated (helpful if, say, you have a worker role in addition to your web role and want to leave all the worker role instances running).
When the code update happens, the VMs aren't replaced, but they are recycled, and when they start back up, they are running the updated code.
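For reference, a minimal sketch of such an in-place update using the classic Azure PowerShell module; the service name and file paths are placeholders, and it assumes your subscription already has a default storage account configured for package uploads.

```powershell
# Sketch: in-place upgrade of a running cloud service (classic Azure module).
# Service name and file paths are placeholders; the package is uploaded to the
# subscription's configured storage account before the rolling upgrade starts.
Set-AzureDeployment -Upgrade `
    -ServiceName "mycloudservice" `
    -Package ".\MyApp.cspkg" `
    -Configuration ".\ServiceConfiguration.Cloud.cscfg" `
    -Slot "Production" `
    -Mode Auto
```

Adding the -RoleName parameter limits the upgrade to a single role, matching the single-role update described above.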

You can use WebDeploy with Cloud Services in production across multiple servers using the AzureWebFarm project (disclaimer: I maintain it).
Alternatively, you can also use the excellent Octopus Deploy deployment technology in conjunction with the AzureWebFarm.OctopusDeploy project (disclaimer: I maintain this one too).
To be honest though, if you just have a simple web app then I wouldn't bother with cloud services - I'd just use Web Sites. Feel free to check out my blog post to see the situations which might force you to use cloud services though.

If you enable WebDeploy on the cloud service, you can use web deploy to publish the MVC application.
See http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx for details.

All of the above answers are correct, and if you are trying to change your code for a production service you definitely want to do an in-place upgrade as described. However, during the dev/test phase or while troubleshooting, I frequently want to make one small change and test it out quickly. To do this, check out http://blogs.msdn.com/b/kwill/archive/2013/09/05/how-to-modify-a-running-azure-service.aspx which describes how to modify the code via RDP to the Azure VM.

Related

Different environments in Azure shared app service plan

Recently I've started experimenting and getting familiar with some of the Azure offerings. I made a simple app and connected it with Azure Functions and Azure Storage, as well as some other offerings like Service Bus, for example.
So far so good, the app is working great and I got my feet wet with some great Azure services.
But now I'm unsure how best to proceed, because what I have so far is a development version of my app. If I wanted to make a prod version, would I have to provision a different set of all the Azure resources used for the dev version?
So basically, I would have mydevsite.azurewebsites.net and myprodsite.azurewebsites.net. Is this correct? I can restrict mydevsite.azurewebsites.net with some IP address restrictions so that it is not publicly available, but I still feel this is a hacky way of doing it and that there should be a better way.
Is there a common approach to a scenario like this?
This is a bit of a broad question, but I can tell you how I have done it before.
A common setup would be three environments, Dev, Test and Production.
Dev mostly runs on the developer's machine (as much as it can). We use a local IIS installation to run the web app, and a local SQL Server as a database. Azure Storage and Cosmos DB can also be emulated locally. Certain services, like Search for example, can't be run locally, so you would have to run those in Azure anyway.
Test and Production are basically two identical resource groups with the same resources, just configured slightly differently. So double the App Service Plans, SQL databases etc.
Depending on how you want to do it though, you can of course share resources across environments. But it is a good idea to somehow make sure they do not accidentally use the other environment's stuff. And the definite downside is that you are putting production data in the same place as test data, which frankly should not live together.
I know some organizations run a Dev environment fully in Azure. There can be a couple of reasons for this: a very heavy environment which can't really run on dev machines, or wanting to test ARM template deployment at the dev stage too.
Having duplicated services allows you to use ARM templates for automatically deploying and updating the infrastructure, which is pretty nice.
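As a rough sketch, deploying the same template to per-environment resource groups with the AzureRM module could look like this; the resource group name, location, and file paths are hypothetical.

```powershell
# Sketch: stamp out one environment from a shared ARM template.
# Resource group name, location, and file paths are placeholders;
# repeat with a different parameter file for each environment.
New-AzureRmResourceGroup -Name "myapp-test" -Location "westeurope"
New-AzureRmResourceGroupDeployment -ResourceGroupName "myapp-test" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.test.parameters.json"
```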
If you are on Standard or higher, you might think of using Deployment Slots in App Service for different environments, but they are really not meant for that purpose. We use them to reduce application downtime when deploying a new version, and as a fallback if the update turns out bad. So the deployment goes to a "staging" deployment slot, which gets swapped with the production slot, and the new version is live. We then stop the deployment slot so we are not unnecessarily running the older version in the background.
But otherwise we have a separate App Service Plan with separate Web Apps with their own staging slots.
Deployment slots documentation: https://learn.microsoft.com/en-us/azure/app-service/web-sites-staged-publishing
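For illustration, the swap-then-stop flow described above might look like this with the AzureRM cmdlets; the app and resource group names are hypothetical.

```powershell
# Sketch: swap the staging slot into production, then stop the slot that
# now holds the previous version. All names are placeholders.
Switch-AzureRmWebAppSlot -ResourceGroupName "myapp-rg" -Name "myapp" `
    -SourceSlotName "staging" -DestinationSlotName "production"
Stop-AzureRmWebAppSlot -ResourceGroupName "myapp-rg" -Name "myapp" -Slot "staging"
```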

Setup and Deployment of a Multi-Part Web Application in Azure

Note: I'm relatively inexperienced in Azure. So sorry if I'm missing some obvious points or choices.
I need to evaluate how to set up and deploy (with frequent updates) three .NET web applications which are tightly coupled.
Here's a simplified breakdown of the system's parts. There are three web applications:
Backend
Frontend
API
The web applications are separate applications, but they share the same database and internally use the same domain logic and database libraries.
So, when there's a software update which sometimes automatically upgrades the database's schema, it must be deployed at the same moment to all three web apps (within a few seconds).
What's the best way to accomplish this on the Azure platform (App Services)?
Bonus Question
Currently, I'm running dozens of these systems (each consisting of three applications). Is there a way to deploy the runtime files to an "Azure-local" place and distribute them afterwards, first to group one, then to group two, etc.?
Have you looked at deployment slots? You can automate the swapping of the 3 App Services with PowerShell or the Azure CLI and swap all 3 at the same time.
You can also automate the deployment of all systems from a repo to a staging slot and automate the swap with PowerShell or the Azure CLI, one system at a time.
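A minimal PowerShell sketch of the all-at-once swap, assuming the AzureRM cmdlets and hypothetical app names:

```powershell
# Sketch: swap staging -> production for the three coupled apps back to back.
# Resource group and app names are placeholders; each swap completes in seconds,
# so all three apps flip to the new version within moments of each other.
foreach ($app in "myapp-backend", "myapp-frontend", "myapp-api") {
    Switch-AzureRmWebAppSlot -ResourceGroupName "myapp-rg" -Name $app `
        -SourceSlotName "staging" -DestinationSlotName "production"
}
```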
CSharpRocks's answer is great for Web Apps. For the database portion, I've got two suggestions:
1. Database Project/DACPAC
Deploy by publishing a Database Project. Under the covers, this deploys a DACPAC to your target database. This can be done by publishing from Visual Studio or via a VSTS deployment task, and I believe it can also be done through PowerShell, though I have personally only read about deploying DACPACs that way.
Given the scenario you describe, it sounds like you'd want to deploy your code changes to a staging slot for all Web Apps. Then you can deploy your Database Project and use PowerShell to swap the slots.
The advantage of a Database Project is that you're guaranteed that your target database will match your Database Project's state. The downside is that if things go sideways, there's not a super-easy downgrade path.
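As an example of the command-line route, publishing a DACPAC with SqlPackage.exe looks roughly like this; the install path, file names, and connection string are all placeholders.

```powershell
# Sketch: publish a DACPAC to a target database with SqlPackage.exe.
# The install path varies by SQL Server / SSDT version; file names and the
# connection string are placeholders.
& "C:\Program Files\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe" `
    /Action:Publish `
    /SourceFile:".\bin\Release\MyDatabase.dacpac" `
    /TargetConnectionString:"Server=tcp:myserver.database.windows.net,1433;Database=MyDb;User ID=deploy;Password=***;Encrypt=True"
```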
2. EF Code First Migrations
A Code First Migration will allow you to specify upgrade and downgrade scripts for your database. I've only executed these through the Package Manager Console, but they can also be executed from a standalone executable, allowing you to script their execution.
The benefit of Code First Migrations is that you can execute a scripted downgrade. This, coupled with a Web App slot swap back to your previous application version, gives you a little more assurance that you can roll back if things go south on you.
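The standalone executable is migrate.exe, which ships in the EntityFramework NuGet package's tools folder; a hedged sketch, with the package version, paths, and assembly/config names all as placeholders:

```powershell
# Sketch: run EF6 Code First migrations outside Visual Studio with migrate.exe.
# migrate.exe must sit next to the assembly that contains the migrations;
# package version, paths, and assembly/config names are placeholders.
Copy-Item ".\packages\EntityFramework.6.2.0\tools\migrate.exe" ".\bin"
& ".\bin\migrate.exe" "MyApp.Data.dll" /startupConfigurationFile="Web.config" /verbose
```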
The other feature to point out is that EF Code First executes the changes that you script, and nothing more. Therefore the following interleaving of events is possible:
Deploy a code first migration.
Manually make a schema change in your DB
Deploy a second code first migration.
EF Code First Migrations don't care at all about step #2 and will leave your manual changes in place. This is not the case when deploying a Database Project: the Database Project will ensure that your target database matches the project definition.
There are some practical considerations at work here. If you're using a SQL Azure DB with automatic performance tuning, Azure will roll out / delete indexes as it sees fit. If you use a Database Project and do NOT include those automatic index updates, deploying the Database Project will roll those index changes back!

Patching a website on Azure web roles

Sometimes issues come up in our website, which is deployed on Azure web roles, related to small bugs in the JavaScript and HTML. We go to all instances of the web roles and fix these JS and HTML files on the machines.
But I was looking for an automated way of doing this: downloading the files to patch from some central location and replacing the files on all Azure web roles. I am using ASP.NET MVC for the website.
It is possible to redeploy the website with the patch in the package, but we don't want to wait through the long deployment time. Please let me know if it is possible via some internal web API which replaces the content on all Azure web roles.
There are two ways to deploy a new web role:
redeploy
in-place update
The first one is the slower of the two, since new VMs are booted.
With an in-place upgrade (https://azure.microsoft.com/en-us/documentation/articles/cloud-services-update-azure-service/), the new application package is mounted on a new drive (usually F: instead of E:) and the IIS website is swapped to the new drive.
You can try this by going to the old portal and uploading a new application package. In just a few seconds/minutes the update is done.
After digging through many things on Stack Overflow, I crafted my own solution: creating a topic and subscribing to it in code when the website starts. When I want to patch the web app, I send a message to the topic to start the patching; each machine in the web role then gets a notification from the topic and starts patching itself. The patching itself is very easy: go to web storage, download the files from there, and replace the files in approot.
When Azure maintenance happens this patching may be undone, so for this situation I made the patching run at website startup too.
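The download-and-replace step might look like this sketch using the classic Azure Storage cmdlets; the account name, key variable, container, and approot path are all hypothetical.

```powershell
# Sketch: pull patched files from blob storage and overwrite the site files.
# $storageKey is assumed to hold the account key; the account, container, and
# destination path are placeholders.
$ctx = New-AzureStorageContext -StorageAccountName "mypatches" -StorageAccountKey $storageKey
Get-AzureStorageBlob -Container "patch" -Context $ctx | ForEach-Object {
    Get-AzureStorageBlobContent -Blob $_.Name -Container "patch" `
        -Destination "E:\approot\" -Context $ctx -Force
}
```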
Cloud service deployment packages tend to be slow to roll out since they are basically a recipe for how to build and configure your deployment. The deployment not only puts the recipe out in Azure (so it can be used again if it needs to move your machine), but also follows the recipe to build out a VM for your cloud service (web roles / worker roles are platform as a service, so you don't have to worry about the OS and infrastructure level like you would if you were using the Virtual Machine Azure product, but they do still run in VMs on physical hardware).
What you are looking to do is something that will update the recipe (your cloud service package) and your deployment after it is out and running already ... there is no simple way to do that in Cloud Services.
However, yes, you could create a startup script that pulls the site files from blob storage or some other centralized location - this is comparable to how applications (Fiddler, for example) look for updates and then know how to update and replace themselves. For that sort of feature you will likely need to run code as an elevated user - one nice thing about startup scripts is that they can run as an elevated user, so they can do about anything you need done on a machine (but they will require you to restart the instance for them to run). Basically you would need to write some code that allows your site to update itself. This link may help: https://azure.microsoft.com/en-us/documentation/articles/cloud-services-startup-tasks/
If you have the ability to migrate to Web Apps and WebJobs, I would recommend looking into that, since that compute product solves your problem really well.
Here is a useful answer on the differences between Web Apps and Cloud Services: What is the difference between an Azure Web Site and an Azure Web Role

Automated deployment to Azure

I am investigating ways to automate deployment of a specific build of a product to a specific Azure Cloud Service or VM.
The following steps would be automated, with as little manual intervention as possible:
Create a Cloud Service or VM
Install a specific build of the product (as a standalone exe or Windows service, not IIS)
Tweak the configuration files(s)
Set up user account(s)
Run the exe/service
The code is currently in Visual Studio Online / TFS. We have CruiseControl.NET CI set up and we are looking at moving to TeamCity.
This will be used for the usual QA & Production type environments, but also for ad-hoc deployment e.g. if a trial feature has been added to the product and we want to deploy that to a new VM for a specific customer to play around with. Ideally we would be able to use the command line or a UI to pick the build, create the VM and specify any configuration changes.
One possible solution might be Octopus Deploy, although I don't think it would be able to actually create an Azure VM. I will probably also look at the Azure API, and at TFS deploy.
Basically, is this feasible, and are there any proven alternatives that I'm missing, in order to narrow down my research?
Thanks in advance!
While Octopus Deploy can do many things, in this particular scenario of yours, you're asking it to do three types of work - release management, automated provisioning and configuration management. It's a fine line between automation awesomeness and a really sticky situation.
Of the tasks you're asking about, almost all of them can be done within Octopus today. I'd argue that it may even be possible to create a cloud service or VM: if there's some PowerShell cmdlet/library that allows you to spin up VMs with authentication, odds are you can do it in Octopus - but it may not be the right tool to do that job today. Why?
In my opinion, it distorts the barrier between developers, DevOps and sysadmins. Whether you use Chef, Puppet, Salt, etc., whatever configuration management you have needs a whole layer of users with the expertise to back it up - often expertise which the very developers who want such flexibility may not have. Secondly, right now this isn't a focus within Octopus (yet). I'd be hard pressed to say whether to use a tool such as Octopus based on what it can do vs. what it should do.
It's really nice that Azure now has support for preinstalling the Octopus Tentacle on VMs. But that requires additional info, such as the server thumbprint, port, and other supplementary configuration, in order to automate VM provisioning. That configuration management - should it be under Octopus's control, or something like Chef or Puppet? I honestly don't have an answer to this, but my feeling as of now is not Octopus. Someday, perhaps, but until this is really ready and fully tested and vetted, I'd wait it out (a little) at least with Octopus.
If you're the adventurous type, then by all means try out Octopus. I may do a PoC (proof of concept) of this infrastructure automation later this year, but relying on it today as the primary means of infrastructure automation for business/production usage would be risky and require a lot of work and experimentation. Again, I'm not saying it cannot be done; I'm questioning whether it should be done within Octopus as of this response today.
If anything, from the Octopus Deploy side of things, is this feasible? Yes - it just hasn't quite been worked out yet. Looking at what you want to do, I'd say it's a two-phase process: 1. spinning up the new VM and attaching the Tentacle to the environment, and 2. running the deployment process on that new VM.
I'd also recommend checking out the Octopus blog. They're publicly talking about infrastructure automation. You can read about it here: http://octopusdeploy.com/blog/rfc-cloud-and-infrastructure-automation-support
I hope this response helps in some way.
The solution for automated deployment in Azure is to use ElasticBox.
I will skip the details of all the configuration options for Azure supported by ElasticBox, as they are detailed in the documentation section: http://elasticbox.com/documentation/deploying-and-managing-instances/using-azure/.
You only need to create a box (the abstraction unit ElasticBox uses to define the installation and configuration of the deployment of a service or application in any cloud) that takes care of the steps you need automated. You will then deploy the VM with nearly no manual intervention - just one click, or a command with some parameters.
A box includes the variables necessary for your deployment and your scripts (in this case probably PowerShell, but they could be Bash, Python, Perl, Java, etc.).
When you deploy the box you create to deploy your application, ElasticBox will:
Create a Cloud Service or VM. (ElasticBox takes care of provisioning the VM in Azure, or in any of your preferred cloud providers.)
Install a specific build of the product (as a standalone exe or Windows service, not IIS) -> This should be your install event script.
Tweak the configuration files(s) -> This should be part of your configure event script.
Set up user account(s) -> This should be part of your configure event script.
Run the exe/service -> This should be part of your start event script.
ElasticBox has a command line tool that enables you to do VM deployments of your boxes; you can also manage your deployed VMs with it: https://pypi.python.org/pypi/ebcli
It also supports automatic termination of the VM after a custom time value.
This is quite a broad question, but the goal is certainly achievable via one of a number of methods. While a bit old, Tom Hollander's blog on automated deployments is a good starting place. I've seen a lot of Octopus Deploy used, as well as TeamCity, but they all ultimately rely on Azure's PowerShell cmdlets, the Management Libraries in custom code, or pure REST API calls.
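As one data point, spinning up a VM with the classic Azure cmdlets takes only a few lines; the service name, VM name, image filter, and credentials below are all placeholders.

```powershell
# Sketch: create a Windows VM with the classic Azure PowerShell module.
# $adminPassword is assumed to hold the admin password; all names are placeholders.
$image = Get-AzureVMImage |
    Where-Object { $_.Label -like "Windows Server 2012 R2*" } |
    Select-Object -First 1
New-AzureQuickVM -Windows -ServiceName "trial-svc" -Name "trial-vm-01" `
    -ImageName $image.ImageName -AdminUsername "deploy" -Password $adminPassword `
    -InstanceSize "Small" -Location "West Europe"
```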
Just an FYI: one option is to do everything by using the Azure Management API. I also like to reference the Azure Client Libraries in a VS project and do everything in C# code.

Running a batch process in Windows Azure Websites?

Deploying apps to Windows Azure Websites feels incredibly more convenient compared to the initial WebRole option. Being able to push through Git and get the app restarted in ~20s is a massive improvement over the 15min role redeploy.
Thus, I am considering using this option for what used to be hosted in a WorkerRole as well. Indeed, it's possible to allocate a full VM to run a WA website.
Are there any gotchas to be aware of when attempting this? Obviously, as the name suggests, WA websites are not intended for back-office processing.
In an upcoming feature for Windows Azure Websites, the scenario you're referring to will be supported:
http://github.com/projectkudu/kudu/wiki/Web-jobs
The following will allow you to have two types of processes run alongside your website:
Triggered - Start your process on a scheduled (or manual) basis.
Continuous - Your process will always be on (if it goes down, it will be brought back up).
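WebJobs are picked up by convention from folders under App_Data, so a minimal triggered job can be nothing more than a script; the job name below is hypothetical.

```powershell
# Sketch: site\wwwroot\App_Data\jobs\triggered\nightly-batch\run.ps1
# Kudu runs any run.* script it finds in the job's folder; the job name
# "nightly-batch" is a placeholder.
Write-Output "Batch run started at $(Get-Date -Format o)"
# ... back-office processing goes here ...
```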
Regarding the differences between Azure Webrole and Azure Website, there's a different question:
What is the difference between an Azure Web Site and an Azure Web Role
Cloud Services gives you two different environments: staging and production. You can also use continuous deployment with Git, TFS, CodePlex or Dropbox. But if you don't need these two environments, you can go with Web Sites.
Using a Virtual Machine, you'll be responsible for the operating system, runtime, data and also your app (obviously). Just be aware that you'll have to apply the service packs / security patches yourself. If your app doesn't use 3rd party components, I don't see a reason to use a VM for that.
