We are in the process of implementing DSC in our organization. We are able to create and deploy configurations successfully in our non-prod test environment.
Before we implement DSC in the PROD environment, our management needs it integrated with ITSM/change management so that every change has a change ticket (we are using ServiceNow). We can take care of this during the creation and deployment of DSC configurations.
However, the real problem arises once a DSC configuration is deployed and in effect. How do we integrate ITSM/change management and a logging mechanism at that point?
Let me give an example: suppose we have SERVER1, for which we have created a configuration that ensures ‘TapiSrv’ is always in the ‘Stopped’ state. Now, due to some requirement, user X creates a change ticket to start this service and starts it successfully as per the ticket. Later, the LCM triggers the DSC configuration and restores the service to its original state, i.e. ‘Stopped’. The user has no clue why this happened, and we have no change ticket raised before the LCM restored the service. The change happened without a change ticket or any logging mechanism.
Can we integrate some code that executes before the LCM restores/reverts the changes made to the service, so that we can do two things: create a change ticket programmatically and create a DB entry before the configuration is actually restored?
We can take care of writing the code that creates the change ticket and makes the DB entry, but how can we trigger that code before the LCM restores the configuration?
This would also help us generate a report on how many times a server drifted from its configuration and the LCM restored it.
I don't know how to trigger code before the LCM brings anything back to its original state.
You can do this by running the LCM in consistency-check (monitor-only) mode. When the drift reports arrive in the database connected to your DSC pull server, you create the ticket, and after the change ticket has been processed you simply invoke the DSC apply manually.
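As a rough sketch of that flow, assuming a pull/report server is already registered and SERVER1 is the node from the example: the LCM is switched to ApplyAndMonitor so it only detects and reports drift, and the already-published configuration is re-applied on demand once the ServiceNow ticket has been processed.

```powershell
# Meta-configuration: the LCM only monitors and reports drift, it never auto-corrects.
[DSCLocalConfigurationManager()]
configuration MonitorOnlyLcm
{
    Node 'SERVER1'
    {
        Settings
        {
            RefreshMode                    = 'Pull'
            ConfigurationMode              = 'ApplyAndMonitor'   # detect drift, do not fix it
            ConfigurationModeFrequencyMins = 15
            RebootNodeIfNeeded             = $false
        }
        # ConfigurationRepositoryWeb / ReportServerWeb blocks pointing at your pull and report server go here.
    }
}

MonitorOnlyLcm -OutputPath 'C:\Dsc\Lcm'
Set-DscLocalConfigurationManager -Path 'C:\Dsc\Lcm' -ComputerName 'SERVER1' -Verbose

# Later, after the change ticket raised from the drift report has been approved,
# re-apply the configuration that is already on the node:
Test-DscConfiguration  -ComputerName 'SERVER1' -Detailed   # confirm what drifted
Start-DscConfiguration -UseExisting -ComputerName 'SERVER1' -Wait -Verbose
```

The drift events themselves can also be picked up from the Microsoft-Windows-DSC/Operational event log or the report server database, which is what you would feed into the change-ticket and reporting code.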
I'm planning to migrate a classic Azure VM to ARM.
On the "Migrate to ARM" page in the Azure Portal, I have passed the (1) Validate step and then completed the (2) Prepare step without problems. Now the page shows two green checks and says:
Prepare operation completed.
Your resources will be migrated to the following resource group(s). You may wish to examine them to ensure the results are as expected.
The question is: how do I examine the results? What results should I examine?
Does it mean the contents of the VM? At the Prepare step the VM stopped, as I expected. How can I test the web applications or files inside the VM while it is stopped?
Are there any other checks I should do after a successful Prepare and before Commit?
The VM is a production server, and we must guarantee its health and a quick return to uptime before pressing "yes" and "Commit".
The message indicates that you might want to double-check that all the resources shown as ready for the move are the ones you expect to migrate to the desired resource group in the classic-to-ARM conversion.
"Examine them" simply means review the selected resources:
Message: Your resources will be migrated to the following resource group(s). You may wish to examine them to ensure the results are as expected.
How can I test the web applications or files inside the VM while it is stopped?
You'll have to ensure the web application is working and all files are intact before starting the migration.
You won't be able to test it while the VM is off.
Are there any other checks I should do after a successful Prepare and before Commit?
None; you can commit as long as there are no validation errors.
I would recommend going through the Technical deep dive on platform-supported migration from classic to Azure Resource Manager.
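If you want to drive or re-check the same Validate/Prepare/Commit steps from PowerShell rather than the portal, here is a rough sketch with the classic (ASM) module; the subscription, service name and virtual-network switch are placeholders for your own values.

```powershell
# Classic (ASM) Azure PowerShell module; names below are placeholders.
Add-AzureAccount
Select-AzureSubscription -SubscriptionName 'MySubscription'

$serviceName    = 'my-classic-cloud-service'
$deploymentName = (Get-AzureDeployment -ServiceName $serviceName).DeploymentName

# 1) Validate and 2) Prepare (the steps already completed in the portal)
Move-AzureService -Validate -ServiceName $serviceName -DeploymentName $deploymentName -CreateNewVirtualNetwork
Move-AzureService -Prepare  -ServiceName $serviceName -DeploymentName $deploymentName -CreateNewVirtualNetwork

# Inspect the prepared resources in the new resource group, then either commit or roll back:
Move-AzureService -Commit -ServiceName $serviceName -DeploymentName $deploymentName
# Move-AzureService -Abort -ServiceName $serviceName -DeploymentName $deploymentName
```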
Good luck with the migration!
Currently, we are hosting five websites on a Linux VM. The websites reside in separate directories and are served by Nginx. SSL is terminated at an Azure Application Gateway, which sends the traffic to the VM. If a file is updated in a remote repository, the local copy is updated by a cron task: a simple Bash script that runs git pull plus a few additional lines. Not all five websites need to be updated at the same time.
We created an image of the VM and provisioned a VMSS from it.
What would be the easiest or standard way of deploying the code to the VMSS? The code also needs some manual changes each time due to client requirements.
Have a look into Azure Durable Functions as an active scripted deployment manager.
You can configure your Durable Function to be triggered on a cron schedule; it can then orchestrate a series of tasks, monitoring the responses from the deployment targets for an acceptable result before continuing each step, or even waiting for user input to proceed.
By authoring your workflow in any of C#/JavaScript/Python/PowerShell, you are only limited by your own ability to transform your manual process into a scripted one.
Azure Functions is just one option of many; it really comes down to the complexity of your workflow and the individual tasks. Octopus Deploy is a common product used to automate Azure application deployments and may have templates that match your current process. I go straight to Durable Functions when I find it too hard to configure complex steps that involve waiting for specific responses from targets before proceeding to the next step, and when I want to use C# to evaluate those responses or perhaps reuse some of my application logic as part of the workflow.
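For illustration only, here is a stripped-down PowerShell orchestrator along those lines; the activity names DeployToVmss and CheckHealth are hypothetical placeholders for your own git-pull/manual-change steps, and the orchestrator itself would be started by a timer-triggered client function.

```powershell
# run.ps1 of a Durable Functions orchestrator (PowerShell).
# Activity function names below are placeholders for your own deployment steps.
param($Context)

$sites = @('site1', 'site2', 'site3', 'site4', 'site5')

foreach ($site in $sites) {
    # Deploy one site, then poll its health before moving on to the next one.
    Invoke-ActivityFunction -FunctionName 'DeployToVmss' -Input $site

    $healthy = $false
    for ($attempt = 0; $attempt -lt 5 -and -not $healthy; $attempt++) {
        $healthy = Invoke-ActivityFunction -FunctionName 'CheckHealth' -Input $site
        if (-not $healthy) {
            # Durable timer: the orchestration waits without consuming compute.
            Start-DurableTimer -Duration (New-TimeSpan -Minutes 1)
        }
    }
}

'Deployment run complete'
```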
What is the difference between updating a deployment and deleting and then creating a new deployment for a cloud service?
We have a cloud service set up which, during deployment, first deletes the existing deployment in staging and then creates a new one. Because of this, the VIP for staging is always changing. We have a requirement that both the PROD and staging VIPs always remain the same.
Before changing the deployment option, I would like to know what the real difference is and why both options exist.
I tried to search but found nothing on this.
EDIT: In the Azure Pub XML, we have a node named 'AzureDeploymentReplacementMethod', and the different options for this field are 'createanddelete', 'automaticupgrade' and 'blastupgrade'.
Right now we are using 'createanddelete', and we are interested in using 'blastupgrade'.
Any help would be much appreciated.
Thanks,
Javed
When you use the Create&Delete deployment method, the process simply deletes the existing deployment and then creates a new one.
The other two options perform an upgrade deployment. The difference between automaticupgrade and blastupgrade is the value of the Mode element of the Upgrade Deployment operation. As their names suggest, automaticupgrade sends Auto for that element, while blastupgrade sends Simultaneous. As per the documentation:
Mode: Required. Specifies the type of update to initiate. Role instances are allocated to update domains when the service is deployed. Updates can be initiated manually in each update domain or initiated automatically in all update domains. Possible values are:
Auto
Manual
Simultaneous
If not specified, the default value is Auto. If set to Manual, WalkUpgradeDomain must be called to apply the update. If set to Auto, the update is automatically applied to each update domain in sequence. The Simultaneous setting is only available in version 2012-12-01 or higher.
You can read more on Update Cloud Service here.
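If it helps to see what the publish-profile setting maps to, the same Mode value can be sent with the classic Azure PowerShell module; a minimal sketch, with names and paths as placeholders:

```powershell
# Classic (ASM) module: in-place upgrade of an existing cloud service deployment.
# -Mode Auto corresponds to automaticupgrade; -Mode Simultaneous corresponds to blastupgrade.
Set-AzureDeployment -Upgrade `
    -ServiceName   'my-cloud-service' `
    -Slot          'Staging' `
    -Package       'C:\build\MyService.cspkg' `
    -Configuration 'C:\build\ServiceConfiguration.Cloud.cscfg' `
    -Mode          'Simultaneous' `
    -Label         'Release 2014-05-01'
```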
However, if you really want the VIPs to persist in all situations, I would suggest that you either:
do not use staging for cloud services at all, and instead use two separate cloud services (one for production and one for staging), or
use the Reserved IP Address feature of the Azure platform (see the sketch below).
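A rough sketch of the reserved IP route with the classic module; the names and region are placeholders, and the reservation is then referenced from the service configuration rather than from code:

```powershell
# Classic (ASM) module: reserve the public VIPs once, independently of any deployment.
New-AzureReservedIP -ReservedIPName 'MyProdVip'    -Location 'West Europe'
New-AzureReservedIP -ReservedIPName 'MyStagingVip' -Location 'West Europe'
Get-AzureReservedIP   # confirm the reservations and see which deployments use them
```

Each deployment's .cscfg then points at its reservation (roughly, a NetworkConfiguration > AddressAssignments > ReservedIPs > ReservedIP element carrying the reserved name), so the VIP survives redeployments.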
I'm trying to create a startup script or task, executed on role start, that changes the configuration settings in the .cscfg file.
I'm able to access these settings, but haven't been able to successfully change them. I'm hoping for pointers on how to change settings on Role Start, or if it's even possible.
Thanks.
EDIT: What I'm trying to accomplish
I'm trying to build a service that makes it easier to manage configuration values for Azure applications. Right now, if I want to change a setting that is the same across 7 different environments, I have to change it in 7 different .cscfg files.
My thought is that I can create a web service that the application will query for its configuration values. The web service will look in a storage location, like Azure Tables, and return the correct configuration values. This way, I can edit just one value in Tables, and it will be changed in the correct environments much more quickly.
I've been able to integrate this into a deployment script pretty easily (package the app, get the settings, change the cscfg file, deploy). The problem with that is every time you want to change a setting, you have to redeploy.
Black-o, given that your desire appears to be to manage the connection settings among multiple deployments (30+), I would suggest that your need would perhaps be better met by using a separate configuration store. This could be Azure Storage (tables, or perhaps just a config file in a blob container), a relational database, or even an external configuration service.
These options require only a minimal amount of information to be placed into the .cscfg file (just enough to point at and authorize against the configuration store), and allow you to maintain all the detailed settings side by side.
A simple example might use a single storage account, put the configuration settings into Azure Tables, and use a "deployment" ID as the partition key. The config file for each deployment then just needs the connection info for the storage location (unless you can get by with a shared access signature) and its deployment ID. The role can then retrieve the configuration settings on startup and cache them locally for performance (either in a distributed memory cache or perhaps on the temp "local storage" drive of each instance).
The code to pull all this together shouldn't take more than a couple of hours. Just make sure you also account for resiliency in case your chosen configuration provider isn't available.
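To make that layout concrete, here is a small sketch using the Az.Storage and AzTable PowerShell modules; the storage account, table name, deployment ID and setting are all made-up placeholders:

```powershell
# Requires the Az.Storage and AzTable modules; names and keys below are placeholders.
Import-Module Az.Storage, AzTable

$ctx   = New-AzStorageContext -StorageAccountName 'mycfgstore' -StorageAccountKey $env:CFG_STORE_KEY
$table = (Get-AzStorageTable -Name 'DeploymentConfig' -Context $ctx).CloudTable

# Seed one setting: PartitionKey = deployment ID, RowKey = setting name.
Add-AzTableRow -Table $table -PartitionKey 'prod-eu-01' -RowKey 'SmtpServer' `
               -Property @{ Value = 'smtp.contoso.com' }

# What each role instance would do at startup: pull all settings for its own deployment ID.
$settings = Get-AzTableRow -Table $table -PartitionKey 'prod-eu-01'
$settings | ForEach-Object { "{0} = {1}" -f $_.RowKey, $_.Value }
```

At role startup that single partition query is all an instance needs before caching the results locally.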
The only way to change the settings during runtime is via the Management API: craft the new settings and execute the "Update Deployment" operation. This will be rather slow because it honors update domains, so depending on your actual problem there might be a much better way to solve it.
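For reference, that operation is what the classic PowerShell module wraps with Set-AzureDeployment -Config; a minimal sketch, with the service name and file path as placeholders:

```powershell
# Classic (ASM) module: push a modified .cscfg to a running deployment.
# This is the "Update Deployment" / change-configuration operation and it walks
# the update domains, so it is not instantaneous.
Set-AzureDeployment -Config `
    -ServiceName   'my-cloud-service' `
    -Slot          'Production' `
    -Configuration 'C:\build\ServiceConfiguration.Cloud.cscfg'
```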
Let's say I have a production and a staging deployment, both using their own (SQL Azure) databases. If the schema in staging has changed and needs to be deployed to production, is there a defined way of achieving the database upgrade on the production database (without downtime)?
e.g. if I swap VIP staging <-> production (and at the same time automate changing the connection strings somehow), what is the best process to automate upgrading the SQL Azure database?
My thought would be to spot the environment change in RoleEnvironmentChanging (though I'm not sure a VIP swap even fires RoleEnvironmentChanging) and run the SQL script against the to-be-production database at that point; however, I need to make sure that script is only run once, and there will be multiple instances transitioning.
So you have a production deployment with its own SQL Azure database and a staging deployment with its own SQL Azure database. In this situation both applications have connection strings pointing to two different databases.
Your first requirement is to change the database schema on the fly when you swap the deployments, and I have the following concern with that design:
If you write any code inside the role to perform a "once and only once" action, there is no guarantee that it will happen only once. It can happen multiple times, depending on several scenarios, such as:
1.1 Whenever your VM needs to be reimaged by the system, this code will do exactly the same thing it did during the last reimage.
1.2 You might try to protect against it running at role start or VM start with some registry flag or external key, but there is no foolproof mechanism to guarantee that.
Because of this, I would suggest that when you are ready to swap your deployments you:
2.1 Run the script to update the production SQL Azure schema (this has no impact on application downtime because the application is not touched; while your database schema is being updated, you know best how that impacts your application).
2.2 Change the configuration in the staging deployment to point to the production SQL Azure database (this causes no production application downtime at all).
2.3 Swap the deployments (this also causes no application downtime).
So even when you manually update the DB schema and then swap the deployments, there is no significant downtime besides the time taken by the database to update the schema.
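A rough sketch of that 2.1 to 2.3 sequence using the classic PowerShell module plus Invoke-Sqlcmd; the server, service and file names are placeholders, and the schema script is whatever your own upgrade script contains:

```powershell
# 2.1 Update the production SQL Azure schema (Invoke-Sqlcmd from the SqlServer module).
Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' -Database 'ProdDb' `
              -Username 'admin' -Password $env:SQL_PWD -InputFile 'C:\scripts\upgrade-schema.sql'

# 2.2 Point the staging deployment at the production database (a .cscfg edited to hold the prod connection string).
Set-AzureDeployment -Config -ServiceName 'my-cloud-service' -Slot 'Staging' `
                    -Configuration 'C:\build\ServiceConfiguration.ProdDb.cscfg'

# 2.3 VIP swap: staging becomes production.
Move-AzureDeployment -ServiceName 'my-cloud-service'
```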
I have been looking for best practices for this all over the place and have found none. So far, this is what I do:
1. Deploy to staging (production is already running).
2. Copy the app_offline.htm file to the web root on Production. This way I block users from using the application, thus blocking changes to the database. I am using only one instance.
3. Back up the database.
4. Run the DDL, DML and SP scripts. This updates the production database to the latest schema.
5. Test the application on Staging.
6. Swap VIP. This brings the application back online, since the app_offline.htm file is not present on Staging (the new Production).
7. If something goes wrong, swap VIP again, restore the database and delete app_offline.htm.
With this approach I have a downtime of roughly 5 minutes (my database is small), which is better than waiting for the VM to be created while users get errors.
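For what it's worth, steps 3, 4 and 6 of that list can be scripted along these lines; the server, database, credential and path names are placeholders, and CREATE DATABASE ... AS COPY OF is the SQL Azure way to take a transactionally consistent copy:

```powershell
# Step 3: back up by copying the database (run against the logical server's master database).
Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' -Database 'master' `
              -Username 'admin' -Password $env:SQL_PWD `
              -Query "CREATE DATABASE [ProdDb_Backup_20140501] AS COPY OF [ProdDb];"

# Step 4: apply the DDL/DML/stored-procedure scripts to production.
Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' -Database 'ProdDb' `
              -Username 'admin' -Password $env:SQL_PWD -InputFile 'C:\scripts\upgrade.sql'

# Step 6: VIP swap (classic module); Staging, which has no app_offline.htm, becomes Production.
Move-AzureDeployment -ServiceName 'my-cloud-service'
```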