I'm trying to replace a hosted service with an empty redirect project; however, when I try to do so I get the following error:
Windows Azure cannot perform a VIP swap between deployments that have a different number of endpoints.
I believe the only solutions available to me are the following:
1. Point DNS at the staging deployment, then after roughly 48 hours of propagation delete the production instance and point my DNS at a new deployment of the empty redirect project.
2. Delete the production instance and then immediately flip staging. This will of course result in downtime.
Unfortunately, changing DNS records isn't an option for me at this stage, so unless anyone can suggest an alternative I will have to go with option 2.
My only question with this is: once I delete production and flip staging, will the new production instance retain the old IP? As I said above, I'm unable to change the DNS records.
Thanks; if you have any questions, let me know.
I resolved this issue by adding the additional endpoints to staging through another deployment. Although the application will never use them, this allowed me to deploy without downtime.
Alternative solutions are included in my question.
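This isn't required for the fix above, but as a sanity check before attempting the swap you can have each role instance log the endpoints it actually exposes, which makes it easy to spot where the two slots differ (extra RDP or diagnostics endpoints are a common culprit, as another answer below notes). A minimal C# sketch, assuming a standard worker/web role with the Microsoft.WindowsAzure.ServiceRuntime assembly referenced:

```csharp
// Hedged sketch, not part of the fix itself: log the endpoints this role
// instance exposes so the two slots can be compared before a VIP swap.
using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class EndpointAudit
{
    public static void LogEndpoints()
    {
        foreach (var pair in RoleEnvironment.CurrentRoleInstance.InstanceEndpoints)
        {
            RoleInstanceEndpoint endpoint = pair.Value;
            Trace.TraceInformation(
                "Endpoint {0}: {1} {2}", pair.Key, endpoint.Protocol, endpoint.IPEndpoint);
        }
    }
}
```

Calling this from OnStart in each slot and comparing the trace output quickly shows which deployment has the extra endpoints.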
Once the current deployment in the Production slot is deleted, the IP address should be allocated back to the pool (not available to you).
To spare you the hassle of re-deploying and an extra 15 minutes of waiting, you can do it within a minute by deleting the PRODUCTION slot (if that's applicable in your case!), and then doing a VIP swap from
PROD: empty
STAGING: your deployment
You'll see "OK Deleted the production deployment of cloud service od..."
and end up with
"Successfully swapped the deployments in cloud service od..."
Vincent Thavonekham
This is often the result of having RDP (Remote Desktop) enabled in one slot but not the other. RDP creates additional endpoints, hence the error message. You can either (a) enable RDP on the slot that doesn't have it or (b) disable RDP on the slot that does. The VIP swap should then work.
So I have been running Azure VMs in the classic portal for a while now, but I need to increase their performance and I am thinking of moving to the premium VMs. The problem I found during testing is that the DNS names have changed: they aren't 'servicename.cloudapp.net' anymore, they are like 'servicename.australiaeast.cloudapp.azure.com'. I need to keep the DNS name the same, i.e. 'servicename.cloudapp.net'.
I have tried redirecting it through our third party DNS service but it isn't possible.
Is there a way to achieve this?
Thanks in advance
The DNS format for v2 (Resource Manager) VMs is <hostname>.<regionname>.cloudapp.azure.com. There is no way to change this.
If you need to keep servicename.cloudapp.net, the only way you can do so is to remain on v1 virtual machines.
edited to address comment
I would imagine that at some point in future v1 VMs will be retired and you will need to figure out how to migrate these users away from the current configuration.
It would be prudent to begin that process now while there is no time pressure.
I would imagine the best way forward would be to initially configure a DNS CNAME record to point to the existing database and start migrating users over to that. Once you have transferred everyone, you can then switch over to v2 VMs and they'll never notice.
Customers are quite comfortable with the concept of updates, so as long as you make the process as painless as possible for them (i.e. just a single executable etc) then it is unlikely they'll mind. Especially if you can roll out some sort of free upgrade along with it.
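One way to make that CNAME-based migration painless on the client side is to stop hard-coding the cloudapp.net name in the client at all. A minimal sketch, assuming a .NET client; the 'ServiceHost' app setting, the api.yourowndomain.com CNAME and the 'health' path are hypothetical placeholders:

```csharp
// Minimal sketch: read the server address from configuration instead of
// hard-coding servicename.cloudapp.net, so a later CNAME / v2 move is
// transparent to customers.
using System;
using System.Configuration;
using System.Net.Http;
using System.Threading.Tasks;

class Client
{
    static async Task Main()
    {
        // e.g. <add key="ServiceHost" value="api.yourowndomain.com" /> in App.config,
        // where api.yourowndomain.com is a CNAME you control.
        string host = ConfigurationManager.AppSettings["ServiceHost"]
                      ?? "servicename.cloudapp.net"; // fall back to the old name

        using (var http = new HttpClient { BaseAddress = new Uri($"https://{host}/") })
        {
            // "health" is just a placeholder request to show the call pattern.
            var response = await http.GetAsync("health");
            Console.WriteLine($"{host} -> {(int)response.StatusCode}");
        }
    }
}
```

Once every customer is on the configurable name, switching the CNAME target from the v1 service to the v2 deployment requires no client update at all.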
It is possible to delete an Azure App Service hosting plan to stop being charged, but I am wondering if it's possible to disable it somehow. If you delete it, you lose its configuration, etc. I did not find any option to turn it off and later put it back online (as is possible with virtual machines, for example), or to reduce the instance count to 0. Has anyone found a way to do this?
P.S. Sometimes it's possible to switch to the Free plan, but that won't work when you have deployment slots, etc.
The only way to 'turn off' an Azure App Hosting Plan is to delete it! While it is up and running you will be charged.
The best solution you've got is to shrink it down to a single small server. The price for that is reasonably low.
The alternative is to automate the whole thing. If you keep the sites you host in GitHub etc., you can create a hosting plan and deploy the deployment slots etc. from a script. You can host the scripts in Azure Automation, and redeploying your site then takes only a few minutes. This does take a little time to set up, though.
What is the difference between updating a deployment and deleting and then creating a new deployment for a cloud service?
We have a cloud service set up which, during deployment, first deletes the existing deployment in staging and then creates a new deployment. Because of this, the VIP for staging is always changing. We have a requirement that both the PROD and staging VIPs always remain the same.
Before changing the deployment option, I would like to know what the real difference is and why these two options exist.
I tried to search but found nothing on this.
EDIT: In the Azure Pub XML, we have a node named 'AzureDeploymentReplacementMethod' and the different options for this field are 'createanddelete', 'automaticupgrade' and 'blastupgrade'
Right now we are using 'createanddelete' and we are interested in using 'blastupgrade'.
Any help would be much appreciated.
Thanks,
Javed
When you use the Create&Delete deployment option, the process simply deletes the existing deployment and then creates a new one.
The other two options perform an upgrade of the deployment. The difference between automaticupgrade and blastupgrade is in the value of the Mode element of the Upgrade Deployment operation. As their names suggest, automaticupgrade sends Auto for that element, while blastupgrade sends Simultaneous. As per the documentation:
Mode: Required. Specifies the type of update to initiate. Role instances are allocated to update domains when the service is deployed. Updates can be initiated manually in each update domain or initiated automatically in all update domains. Possible values are: Auto, Manual, Simultaneous. If not specified, the default value is Auto. If set to Manual, WalkUpgradeDomain must be called to apply the update. If set to Auto, the update is automatically applied to each update domain in sequence. The Simultaneous setting is only available in version 2012-12-01 or higher.
You can read more on Update Cloud Service here.
However, if you really want the VIPs to persist in all situations, I would suggest that you either:
do not use the staging slot for cloud services at all (just use two separate cloud services, one for production and one for staging), or
use the Reserved IP Address feature of the Azure platform.
My understanding of the VMs involved in Azure Cloud Services is that at least some parts of them are not meant to persist throughout the lifetime of the service (unlike regular VMs that you can create through Azure).
This is why you must use Startup Tasks in your ServiceDefinition.csdef file in order to configure certain things.
However, after playing around with it for a while, I can't figure out what does and does not persist.
For instance, I installed an ISAPI filter into IIS by logging into remote desktop. That seems to have persisted across deployments and even a reimaging.
Is there a list somewhere of what does and does not persist and when that persistence will end (what triggers the clearing of it)?
See http://blogs.msdn.com/b/kwill/archive/2012/10/05/windows-azure-disk-partition-preservation.aspx for information about what is preserved on an Azure PaaS VM in different scenarios.
In short, the only things that will truly persist are things packaged in your cscfg/cspkg (i.e. startup tasks). Anything else done at runtime or via RDP will eventually be removed.
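In practice that means any per-instance configuration has to be re-applied from something that ships in the package, either a <Startup> task in the .csdef or code in the role's OnStart. A minimal sketch of the OnStart variant, where ConfigureIis.cmd is a hypothetical script included in the package as Content:

```csharp
// Minimal sketch, not the poster's code: because changes made by hand over RDP
// are wiped when the instance is reimaged or moved, re-apply them every time
// the role starts. ConfigureIis.cmd is a hypothetical script packaged in the
// .cspkg alongside the role binaries (the same effect a <Startup> task gives you).
using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Runs on every (re)start of the instance, including after a reimage,
        // so the configuration never depends on manual RDP changes persisting.
        var setup = Process.Start(new ProcessStartInfo
        {
            FileName = "ConfigureIis.cmd",
            UseShellExecute = false
        });
        setup.WaitForExit();

        return base.OnStart();
    }
}
```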
See - How to: Update a cloud service role or deployment - in most cases, an UPDATE to an existing deployment will preserve local data while updating the application code for your cloud service.
Be aware that if you change the size of a role (that is, the size of a virtual machine that hosts a role instance) or the number of roles, each role instance (virtual machine) must be re-imaged, and any local data will be lost.
Also if you use the standard deployment practice of creating a new deployment in the staging slot and then swapping the VIP, you will also lose all local data (these are new VMs).
After swapping the latest Azure deployment from staging to production, I need to prevent the staging worker role from accessing the queue messages. I can do this by detecting whether the environment is staging or production in code, but can anyone tell me if there is any other way to prevent the staging environment from accessing and processing queue messages?
Thanks for the help!
Mahesh
There is nothing in the platform that would do this. This is an app/code thing. If the app has the credentials (for example, account name and key) to access the queue, then it is doing what it was coded to do.
Have your staging environment use the primary storage key and your production environment use the secondary storage key. When you do the VIP swap you can regenerate the storage key that your now-staging environment is using which will result in it no longer having credentials to access the queue.
Notice that this does introduce a timing issue. If you do the swap first and then change the storage keys then you run the risk of the worker roles picking up messages in between the two operations. If you change the keys first and then do the swap then there will be a second or two where your production service is no longer pulling messages from the queue. It will depend on what your service does as to whether or not this timing issue is acceptable to you.
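For illustration, here is a minimal sketch of a worker that only knows the storage account through a service configuration setting (the setting name 'QueueStorageConnectionString' and the queue name 'orders' are hypothetical). If the staging package is given a connection string built from the secondary key, regenerating that key after the swap means this code starts failing authentication instead of quietly draining the queue:

```csharp
// Minimal sketch of the key-based approach above, assuming the classic
// WindowsAzure.Storage SDK and the Microsoft.WindowsAzure.ConfigurationManager package.
using Microsoft.Azure;                       // CloudConfigurationManager
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class QueueWorker
{
    public static void ProcessOnce()
    {
        // The connection string comes from the .cscfg, so each slot can be
        // deployed with a different key (primary vs secondary).
        string connectionString =
            CloudConfigurationManager.GetSetting("QueueStorageConnectionString");

        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudQueueClient client = account.CreateCloudQueueClient();
        CloudQueue queue = client.GetQueueReference("orders"); // hypothetical queue name

        CloudQueueMessage message = queue.GetMessage();
        if (message != null)
        {
            // ... handle the message ...
            queue.DeleteMessage(message);
        }
    }
}
```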
You can actually detect which Deployment Slot that current instance is running in. I detailed how to do this here: https://stackoverflow.com/a/18138700/1424115
It's really not as easy as it should be, but it's definitely possible.
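Not reproducing the linked answer here, but the building block it relies on is available to the role itself: RoleEnvironment.DeploymentId. A hedged sketch, where the 'ProductionDeploymentId' setting is hypothetical and would have to be kept up to date as part of your swap process:

```csharp
// Hedged sketch (not the code from the linked answer): the role only knows its
// deployment ID, not its slot, so compare that ID against a value you maintain
// yourself to decide whether this instance is currently live.
using Microsoft.WindowsAzure.ServiceRuntime;

public static class SlotCheck
{
    public static bool IsProduction()
    {
        string current = RoleEnvironment.DeploymentId;
        string production =
            RoleEnvironment.GetConfigurationSettingValue("ProductionDeploymentId");
        return string.Equals(current, production, System.StringComparison.OrdinalIgnoreCase);
    }
}
```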
If this is a question of protecting your DEV/TEST environment from your PRODUCTION environment, you may want to consider separate Azure subscriptions (one for each environment). This guide from Patterns and Practices talks about the advantages of this approach.
http://msdn.microsoft.com/en-us/library/ff803371.aspx#sec29
kwill's answer of regenerating keys is a good one, but I ended up doing this:
1. Optional: stop the production worker role from listening to the queue by changing an appropriate configuration key that tells it to ignore messages (see the sketch at the end of this answer), then rebooting the VM (either through the management portal or by killing WaHostBootstrapper.exe).
2. Publish to the staging environment (this will start accessing the queue, which is fine in our case).
3. Swap staging <-> production via Azure.
4. Publish again, this time to the new staging environment (the old live one).
You now have both production and staging worker roles running the latest version and servicing the queue(s). This is a good thing for us, as it gives us twice the capacity, and since staging is running anyway we may as well use it!
It's important that you only use staging as a method of publishing to live (as it was intended) - create a whole new environment for testing/QA purposes, which has its own storage account and message queues.
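For completeness, here is a minimal sketch of the 'ignore messages' flag from step 1 above; the setting name 'IgnoreQueueMessages' is hypothetical and lives in the .cscfg so it can differ per slot and be changed without redeploying:

```csharp
// Minimal sketch of a worker role that can be told to stop servicing the queue
// via a .cscfg setting. A configuration change plus the reboot mentioned above
// is enough for the role to pick up the new value here.
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            bool ignore = string.Equals(
                RoleEnvironment.GetConfigurationSettingValue("IgnoreQueueMessages"),
                "true",
                StringComparison.OrdinalIgnoreCase);

            if (!ignore)
            {
                // Dequeue and process messages here, e.g. via the queue code
                // shown earlier in this thread.
            }

            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
}
```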