I have a cloud service project with deployment settings
"Delete deployment on failure" - unchecked
"Deployment update" - checked
"If deployment can't be updated, do a full deployment" - unchecked
When I deploy a new version, it seems the virtual machine always stays intact; the deployment just creates a new disk with my code, which it attaches as either E: or F:.
Will a deployment ever create a new VM, or will it always use the existing one? Are new VMs only created when the VM template is updated?
In our experience, Cloud Services updates always use the currently deployed instances; they won't create new ones unless you change the number or size of the instances (taken from here).
You might want to consider having a Staging / Production pair of environments and doing a Swap.
When we used Cloud Services, our Staging environment was always empty, we deployed there, then swapped and then deleted the Staging (former Production) deployment.
Related
I've got a few Cloud Services that have both Production and Staging slots. Since I didn't deploy the environment and I'm not aware of what exactly stands behind them: can I delete the Staging slots to lower the cost, given that they are billed the same as Production slots? And if I download the config files, would I eventually be able to import them back into Azure?
You can deploy directly to the production slot of an Azure Cloud Service. If you have more than one role instance (you are running multiple role instances to get the stated SLA, right?), Azure will automatically upgrade each role instance independently of the others.
While this saves you a little bit of money by not deploying to the staging slot, we found the staging slot deployment to work with our continuous integration strategy better.
Reference : How to Manage Cloud Services
Can I delete the Staging slots to lower the cost because they are billed the same as Production slots?
You should be able to delete the staging slot without impacting the production slot. However, if you have users connecting to the staging slot, they will no longer be able to reach the application once you delete that deployment.
And if I download the config files, would I be able eventually to import them back in Azure?
Merely downloading the config file is not going to help, as you would also need the package file. What you should do instead is invoke the Get Deployment Service Management API REST operation. What it will do is copy both the config file and the package file into a storage account of your choice.
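As a rough sketch of what that REST call looks like: the classic Service Management API exposes the deployment of a slot at a well-known URL and requires an `x-ms-version` header. The subscription ID and service name below are placeholders, and a real call also needs a management certificate for authentication; this only builds the request.

```python
# Sketch: build the classic Service Management "Get Deployment" request.
# SUBSCRIPTION_ID and SERVICE_NAME are hypothetical placeholder values.
SUBSCRIPTION_ID = "11111111-2222-3333-4444-555555555555"
SERVICE_NAME = "mycloudservice"
SLOT = "staging"  # "production" or "staging"

def get_deployment_request(subscription_id, service_name, slot):
    """Return the URL and headers for the classic Get Deployment operation."""
    url = (f"https://management.core.windows.net/{subscription_id}"
           f"/services/hostedservices/{service_name}/deploymentslots/{slot}")
    # The Service Management API rejects requests without a version header.
    headers = {"x-ms-version": "2012-03-01"}
    return url, headers

url, headers = get_deployment_request(SUBSCRIPTION_ID, SERVICE_NAME, SLOT)
print(url)
```

The response is an XML document describing the deployment, including the configuration; the package itself is retrieved into the storage account you specify.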
When I create a VM in Azure, it is creating an accompanying Cloud Service and Network Resource. I found that the Cloud Service is there as a deployment layer. I have not found why the Network Interface is there.
Since this particular VM is not going to have a deployment associated with it, as it is used as an Elasticsearch server, I technically will not need the Cloud Service. However, when I delete the service, it takes the VM with it, even though I do not expressly select the VM for deletion.
My two specific questions:
1st - Why is a Cloud Service created, and why can't it be deleted without repercussions, when no deployment is necessary?
2nd - Why is the Network Interface created, and why can't it be deleted without repercussions?
Both questions are with the understanding that this is an Elasticsearch VM.
A cloud service is a required artefact of an ASM/classic deployment of a VM. It is not needed in an Azure Resource Manager (ARM) deployment, which is what you should use for new deployments. However, the two deployment models are orthogonal, so you may need to keep using ASM if you already have VMs deployed that way. If so, you should consider migrating them to ARM.
I am having an issue deploying to the Staging environment of my Windows Azure Cloud Service.
This is something I do frequently without issue before doing a swap to Production (once I have validated everything is OK in Staging). Today for some reason I am getting this error when trying to deploy:
Allocation failed; unable to satisfy constraints in request. The requested new service deployment is bound to an Affinity Group, or it targets a Virtual Network, or there is an existing deployment under this hosted service. Any of these conditions constrains the new deployment to specific Azure resources. Please retry later or try reducing the VM size or number of role instances. Alternatively, if possible, remove the aforementioned constraints or try deploying to a different region. The long running operation tracking ID was: da5cc14aaba6228683cb4e8888b835e1.
Seeing as my deployment package has not changed since the last time I successfully updated my Staging environment (apart from one line of code for a bug fix), I can't see this being an issue with my package. I am hoping this is a transient Azure environment issue. Does anyone have any ideas as to what this may be?
There is a fragmentation issue in the cluster you are trying to deploy to. The ops team is engaged and working to resolve and you should be able to deploy again later tonight or tomorrow.
Some additional information:
Once you create a deployment (in either the production or staging slot) of a cloud service, your entire cloud service (both slots) is pinned to a cluster of machines (there are some Mark Russinovich fabric videos with more details if you are interested). So if there is a problem in a cluster, or you try to deploy a VM size not available in that cluster, such as the new D-series machines, the request may fail if that specific cluster can't allocate it. To resolve this, you can deploy to a brand-new cloud service, which allows the fabric to check all clusters in that datacenter/region to satisfy the allocation request.
Consider a different upgrade strategy for scenarios like this. Many services upgrade by creating a new deployment in a new cloud service, thus getting a new URL and IP address, and then modifying the CNAME or A record to transition clients to the new service.
If you see this issue again you can usually get a fast resolution by opening a support incident - http://azure.microsoft.com/en-us/support/options/
Update: We have a new blog post that describes this scenario and the common causes - http://azure.microsoft.com/blog/2015/03/19/allocation-failure-and-remediation/.
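Since these allocation failures are often transient, a retry with exponential backoff around the deployment step is a reasonable first line of defense before falling back to a new cloud service. The sketch below is generic (the `fake_deploy` function stands in for whatever deployment call your pipeline makes); it is not specific to any Azure SDK.

```python
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry an operation that may fail transiently, doubling the wait each time."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            sleep(base_delay * (2 ** attempt))

# Hypothetical example: a deployment that hits an allocation error
# twice and then succeeds once the cluster has capacity again.
attempts = {"n": 0}
def fake_deploy():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("Allocation failed; unable to satisfy constraints")
    return "deployment succeeded"

print(retry_with_backoff(fake_deploy, sleep=lambda s: None))
```

In a real build pipeline you would keep `time.sleep` as the default and cap `max_attempts` so a genuinely stuck cluster fails fast enough to trigger the new-cloud-service fallback described above.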
We had a similar allocation failed issue recently deploying our Azure Cloud Service:
Azure Cloud Service Deployment Error
Allocation failed; unable to satisfy constraints in request. The requested new service deployment is bound to an Affinity Group, or it targets a Virtual Network, or there is an existing deployment under this hosted service. Any of these conditions constrains the new deployment to specific Azure resources. Please retry later or try reducing the VM size or number of role instances. Alternatively, if possible, remove the aforementioned constraints or try deploying to a different region.
Allocation Failed - Resolution
Delete Existing Cloud Service
Create a new Cloud Service targeting a different data center or resource group (uploading SSL certs is required)
Redeploy cloud service package
Relink VSO Team Projects
I suspect the issue has something to do with a corrupt resource group or recent Azure upgrades that were not backwards compatible with older resource groups.
I basically want to do an auto swap between staging and production on Azure Cloud Services.
Basically, I have a QA environment that needs a fixed IP address for the QA people to test against after a developer finishes a task. Because the staging IP sometimes changes due to problems that can occur in the TFS builds, I want a fixed address the QA team can access without having to click Swap manually.
When you do a VIP swap between the Production and Staging deployments, the VIPs of both deployments are EXCHANGED: Production becomes Staging, and Staging becomes Production.
If you want your new Staging deployment (previously Production) to hold the latest bits of your application, you have to re-deploy the application to the new Staging slot. This process can be automated through PowerShell. The resources below can help you get started with automating the deployment process.
To create a new deployment using PowerShell
Run PowerShell script in TFS build process
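For reference, the swap itself can also be driven directly through the classic Service Management API's Swap Deployment operation, which takes a small XML body naming the current production deployment and the deployment to promote. The sketch below only builds that body; the deployment names are hypothetical, and the real call is a POST to the hosted service's management URL with a management certificate.

```python
def swap_deployment_body(production_name, source_deployment_name):
    """Build the XML body for the classic Swap Deployment operation.

    production_name: name of the deployment currently in the production slot.
    source_deployment_name: name of the (staging) deployment to promote.
    """
    return (
        '<Swap xmlns="http://schemas.microsoft.com/windowsazure">'
        f"<Production>{production_name}</Production>"
        f"<SourceDeployment>{source_deployment_name}</SourceDeployment>"
        "</Swap>"
    )

# Hypothetical deployment names:
print(swap_deployment_body("prod-deployment", "staging-deployment"))
```

A TFS build step could generate this body and POST it after a successful staging deployment, giving the "auto swap" behavior the question asks for.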
My first question would be why the staging deployment/slot is having those problems?
If the need is to programmatically perform the VIP swap, I'd think you could add some custom logic to do so via the Windows Azure Service Management API. Perhaps add this into your build definition (e.g., execute it via PowerShell).
I have two storage accounts in my Windows Azure account. One is for the web application itself, and I created a second one to store files uploaded by end users.
My question is: is that second storage account actually needed? I created it because every time I deploy my web app, I get the message, "The selected deployment environment is in use, would you like to replace the current deployment?".
I infer from that message that, if I store uploaded files in the web app's storage account, they will be replaced on every deployment.
So I am basically trying to confirm if my interpretation is correct. Thank you.
That's telling you that the cloud service environment slot (production or staging) already has a service in it. It has NOTHING to do with the storage account. So no, I don't believe you need a second storage account unless you need it specifically for scaling purposes (exceeding the 5,000-transactions-per-second limit) or for security (controlling access to data).
When you deploy your Windows Azure application from Visual Studio to a specific service name "xyz", in the "production" or "staging" slot, and there is already an application deployed to the same service name "xyz" and the same slot, you will get the message "The selected deployment environment is in use, would you like to replace the current deployment?". This is because Visual Studio redeploys to an existing slot by first deleting the current deployment.
In this whole situation, Azure Blob Storage is only used to hold your application package, in a container named "vsdeploy".
So you can certainly use a single Windows Azure Storage account for all of your applications, and that would be a better design.