I am working on creating an Azure scale set for an application. I have it set up so that when I scale out, it downloads this PowerShell script I've used in the past that will:
download the CI/CD deploy agent to the new VM, install, and fully configure it.
register the new deploy target with my CI/CD pipeline, and configure it to be labeled as my app.
That's all it does. From there, my CI/CD pipeline will auto-deploy my program onto this new machine and configure it for me when it detects the new target for the app. That part works fine. When I first set it up a few weeks ago, it was scaling out and auto-deploying perfectly with no issues.
I noticed today that I suddenly get a 403 trying to download the extension to the machine on provision during scale-out, and it wasn't doing this a couple of days ago. It's like my extension expired or something. This puts my scale set into a failed state until it scales in and only the original VMs that are always there as a baseline are left.
I've re-installed the extension and it works, but after a period of time it breaks again. I looked into the JSON of my scale set, and it references a storage account with a name like iaasv2tempstore, which leads me to believe that these extensions are not permanent.
This leaves me with 2 questions:
What is the average life expectancy of a script extension before it's invalidated by Azure deleting it?
Is there a workaround or alternative that lets me make this permanent, or store the script somewhere I control, so the script extension doesn't need to be reinstalled frequently?
It doesn't expire. If it gets a 403 when trying to download from the blob, that probably means you are using a SAS token for auth and the token has expired. If the URL is accessible to the script extension, it won't stop working (unless you block it from talking to Azure, for example).
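For example, if the download URL embeds an ad-hoc SAS token, you can regenerate one with a far-off expiry and rebuild the file URI. A rough sketch with the Az.Storage module (account, container, and blob names are placeholders):
# Read-only SAS for the script blob, valid for 5 years (the returned string starts with "?")
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "accountkey"
$sas = New-AzStorageBlobSASToken -Context $ctx -Container "scripts" -Blob "filename.ps1" -Permission r -ExpiryTime (Get-Date).AddYears(5)
$fileUri = "https://mystorageacct.blob.core.windows.net/scripts/filename.ps1$sas"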
I ended up following some of what was done here in order to fix the issue.
General workaround for Azure's lackluster custom extension UI:
Upload the script yourself into blob storage, to an account that the VMSS can access.
Run this Az PowerShell script. It's not polished enough to handle updating the instances if they are in the process of scaling in or out, but it will get the job done.
$fileUri = @("https://somebloburi/blob/filename.ps1")
$storageAcctName = "account"
$storageKey = "accountkey"
$settings = @{"fileUris" = $fileUri; "commandToExecute" = "powershell -ExecutionPolicy Unrestricted -File filename.ps1"};
$protectedSettings = @{"storageAccountName" = $storageAcctName; "storageAccountKey" = $storageKey};
$myVMSS = Get-AzVmss -ResourceGroupName "VMSS-RG" -VMScaleSetName "myVmss"
$myVMSS = Add-AzVmssExtension -VirtualMachineScaleSet $myVMSS -Name "CustomScriptExtension" -Publisher Microsoft.Compute -Type "CustomScriptExtension" -TypeHandlerVersion "1.7" -AutoUpgradeMinorVersion $True -Setting $settings -ProtectedSetting $protectedSettings
Update-AzVmss -ResourceGroupName "VMSS-RG" -VMScaleSetName myVmss -VirtualMachineScaleSet $myVMSS
If you don't script out the auto-upgrade, you can just do it yourself from the UI.
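If you'd rather script that step as well, something along these lines should push the updated model to instances that already exist (a rough sketch reusing the placeholder names above; run it only while the scale set isn't mid-scale):
# Apply the updated scale set model (including the new extension) to each existing instance
$instances = Get-AzVmssVM -ResourceGroupName "VMSS-RG" -VMScaleSetName "myVmss"
foreach ($instance in $instances) {
    Update-AzVmssInstance -ResourceGroupName "VMSS-RG" -VMScaleSetName "myVmss" -InstanceId $instance.InstanceId
}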
A few days ago I moved my service from Azure Cloud Services (classic) to Cloud Services (extended support). The latter doesn't have Production/Staging slots. There is a new swap mechanism that is activated if, during a deploy, we configure the "swappable cloud service". I can do it using the Visual Studio publish magic and it works fine.
Now I want to deploy using a PowerShell script. The code below just creates a new deployment without swap activated. It works fine.
New-AzCloudService -Name $stagingName `
-ResourceGroupName $resourceGroupName `
-Location $location `
-ConfigurationFile $cscfgFilePath `
-DefinitionFile $csdefFilePath `
-PackageFile $cspkgFilePath `
-StorageAccount $storageAccount `
-KeyVaultName $keyVaultName
I didn't find any samples or clues about how to add the "swappable cloud service" to New-AzCloudService. I figured out there is such a setting in
NetworkProfile.SwappableCloudService.Id, but I can't understand how to set it up properly. For example, if I add:
$production= Get-AzCloudService -ResourceGroup $resourceGroupName -CloudServiceName $productionName
$production.NetworkProfile.SwappableCloudService.Id = $production.Name # just to reuse the object
$loadBalancerConfig = CreateLoadBalancerConfig
$networkProfile = @{loadBalancerConfiguration = $loadBalancerConfig; swappableCloudService = $production.NetworkProfile.SwappableCloudService }
New-AzCloudService -Name $stagingName `
...
-NetworkProfile $networkProfile `
I got the error:
New-AzCloudService : Parameter set cannot be resolved using the specified named parameters
Is it possible to set the "swappable cloud service" for New-AzCloudService? How to do it?
Is it possible to set the "swappable cloud service" after the deploy (in any way, Azure portal, API, powershell, etc.)?
I suspect the issue relates to the -NetworkProfile not being supported when -ConfigurationFile, -DefinitionFile, and -PackageFile are used.
In the documentation, -NetworkProfile is used with -PackageUrl and -Configuration instead.
I've asked Microsoft about this as I am having the same issue.
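For reference, a rough sketch of what that documented parameter set might look like (untested; $cspkgSasUrl is a placeholder SAS URL for the package, $roleProfile would be built as in the Az.CloudService docs, and the swappable reference presumably needs the full resource Id of the existing production service):
$production = Get-AzCloudService -ResourceGroupName $resourceGroupName -CloudServiceName $productionName
$networkProfile = @{
    loadBalancerConfiguration = $loadBalancerConfig
    swappableCloudService     = @{ id = $production.Id }   # full resource Id, not the display name
}
New-AzCloudService -Name $stagingName `
    -ResourceGroupName $resourceGroupName `
    -Location $location `
    -PackageUrl $cspkgSasUrl `
    -Configuration (Get-Content $cscfgFilePath -Raw) `
    -RoleProfile $roleProfile `
    -NetworkProfile $networkProfile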
Try $production.Id instead of $production.Name. Or $production.id (I am not sure if the case is important).
Using my Google Fu, I am not finding any information on how to troubleshoot a corrupted Azure virtual machine. It has been working fine for almost 8 months, and I have had to redeploy it 1-2 times before because it would not start.
Any helpful pointers, or can someone point me to some troubleshooting steps, please?
I did a redeploy of my Azure virtual machine and perhaps did not wait long enough before shooting off the support request and creating this post, as I am able to access my VM again. I received the following troubleshooting steps from an Azure support professional, so I am sharing them here in case anyone else runs into these issues:
To overcome this issue we can run a series of PowerShell commands to force-update the actual status of the VM on the Azure backend; please follow this process:
To easily complete this, we can launch Azure PowerShell. Please go to:
https://github.com/Azure/azure-powershell
Click on the “Launch Cloud Shell” button.
In the newly opened window, click on the “PowerShell” link.
Then you can run the script, as is, line by line:
$vmName = "(Your VM NAME)"
$rgName = "(Your resource Group)"
$vm = Get-AzureRmVM -ResourceGroupName $rgName -Name $vmName
Update-AzureRmVM -ResourceGroupName $rgName -VM $vm
This should bring the VM to healthy or ready state.
Please let me know the outcome of the PS command; if successful, try to start the VM.
If it still shows as “corrupted” please share with support a screenshot of the error.
I'm currently looking to move our VMs into a scale set, but I am facing an issue with updating the VMs.
I've got a base image from which I spin up a scale set with 5 instances. Now I have an application update that needs to be pushed to each of these 5 servers. What would be the most suitable and convenient process to achieve this?
I have done some research on this, and one of the possible solutions was to:
Create a new image with the updated application code.
Run a PowerShell script in the templates which replaces the old image with the newer one and updates the VMs accordingly.
I am using ASP.NET for my application. So how do I go about updating each of the VMs in a scale set whenever there is an application update?
I was advised that we could use Chef/Puppet, but this will work out too expensive at $120 per node.
Could someone please suggest a simpler solution? Any help is much appreciated.
Use a script/DSC extension to push updates to your app. The process is straightforward and works exactly the same as for a single VM.
https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-dsc
The scale set "rolling upgrade" feature (currently in preview: https://github.com/Azure/vm-scale-sets/tree/master/preview/upgrade) could probably help; with this feature you simply create your new image, update the scale set model with the new image, and the scale set will roll the new image out in batches across your infrastructure.
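For example, with the AzureRM cmdlets used elsewhere in this thread, updating the model to point at the new custom image looks roughly like this (a sketch; $newImageId and the resource names are placeholders, and rolling upgrades have extra prerequisites such as a health probe):
# Point the scale set model at the updated image; instances then pick it up per the upgrade policy
$vmss = Get-AzureRmVmss -ResourceGroupName "my-rg" -VMScaleSetName "my-vmss"
$vmss.VirtualMachineProfile.StorageProfile.ImageReference.Id = $newImageId   # Id of the new managed image
Update-AzureRmVmss -ResourceGroupName "my-rg" -VMScaleSetName "my-vmss" -VirtualMachineScaleSet $vmss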
Hope this helps!
Use PowerShell to deploy to the scale set. Works like a charm for me :)
$customConfig = @{
    "fileUris" = @("https://$storageAccountName.blob.core.windows.net/scripts/script.ps1");
    "commandToExecute" = "PowerShell -ExecutionPolicy Unrestricted .\script.ps1";
};
$vmss = Get-AzureRmVmss -ResourceGroupName $resourceGroup -VMScaleSetName $vmssname
Add-AzureRmVmssExtension -VirtualMachineScaleSet $vmss -Publisher Microsoft.Compute -Type CustomScriptExtension -TypeHandlerVersion 1.8 -Name "runscript" -Setting $customConfig
# Send the new config to Azure
Update-AzureRmVmss -ResourceGroupName $resourceGroup -Name $vmssname -VirtualMachineScaleSet $vmss
I have a BizSpark Plus subscription. I followed the link below but could not find the capture button:
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-classic-capture-image/#next-steps
Any suggestions?
Until yesterday, I had found nothing in Azure's new portal that could help capture an image from a VM,
but today, out of the blue, I found a "Capture" button right inside the VM's Overview tab :) (it will appear only for VMs with managed disks).
It is super easy to capture an image (it takes about a minute), but you have to SSH into the VM first and run this command (which will delete the user's home folder), as per this official article:
sudo waagent -deprovision+user -force
Then you can use the button.
-- Warning --
Capturing an image from your VM will stop it and mark it as generalized, which means you won't be able to start this VM ever again, because being generalized is an irreversible process "by design"! The sh***y thing is, they didn't even put a warning on the button, so be careful with that!
This feature is not yet available in the new Azure portal. You have two options: Azure Resource Explorer or PowerShell.
Here is an example in PowerShell. In this example the custom image is saved in the VM's storage account, under a location like "System/Microsoft.Compute/Images/templates/***.vhd":
$vmResourceGroup = "iaas-rg";
$vmName = "ubuntu";
$destinationContainerName = "templates";
$vhdNamePrefix = "template";
$sampleOutputTemplatePath = "C:\Templates\ImagesGeneralized\sampleOutputTemplateUbuntu.json";
Login-AzureRmAccount
# Deallocate the VM
Stop-AzureRmVM -ResourceGroupName $vmResourceGroup -Name $vmName
# Generalize the VM
Set-AzureRmVM -ResourceGroupName $vmResourceGroup -Name $vmName -Generalized
# Save the custom vm Image
Save-AzureRmVMImage -ResourceGroupName $vmResourceGroup -VMName $vmName -DestinationContainerName $destinationContainerName -VHDNamePrefix $vhdNamePrefix -Path $sampleOutputTemplatePath
The second option is to use Azure Resource Explorer, where you can execute the operations manually.*
* To execute those operations, "Read/Write" mode must be selected in Azure Resource Explorer.
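If you prefer to stay in PowerShell instead of clicking through Resource Explorer, the same two operations can, as far as I know, be invoked as resource actions (a sketch reusing the variables above; the capture parameters mirror the REST body):
# Mark the VM as generalized
Invoke-AzureRmResourceAction -ResourceGroupName $vmResourceGroup `
    -ResourceType "Microsoft.Compute/virtualMachines" -ResourceName $vmName `
    -Action "generalize" -Force
# Capture the image into the destination container
Invoke-AzureRmResourceAction -ResourceGroupName $vmResourceGroup `
    -ResourceType "Microsoft.Compute/virtualMachines" -ResourceName $vmName `
    -Action "capture" -Force `
    -Parameters @{ vhdPrefix = $vhdNamePrefix; destinationContainerName = $destinationContainerName; overwriteVhds = $false }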
You are able to capture classic virtual machines in the new portal; there is an option for capturing.
In case anyone else is looking into this topic: I was researching it and found some Azure docs here:
Create a managed image of a generalized VM in Azure
which explains that generalizing is a one-way operation (and why),
and how to deal with that by creating a VM copy first:
Create a Windows VM from a specialized disk by using PowerShell
Use option 3 in that article.
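Roughly, the copy-first approach looks like this with the Az module (a sketch under my own assumptions rather than the article's exact steps; all names and $subnetId are placeholders):
# Snapshot the OS disk and build a clone from the copy, leaving the original free to be generalized
$vm       = Get-AzVM -ResourceGroupName "my-rg" -Name "my-vm"
$snapCfg  = New-AzSnapshotConfig -SourceUri $vm.StorageProfile.OsDisk.ManagedDisk.Id -Location $vm.Location -CreateOption Copy
$snapshot = New-AzSnapshot -ResourceGroupName "my-rg" -SnapshotName "my-vm-os-snap" -Snapshot $snapCfg

$diskCfg = New-AzDiskConfig -Location $vm.Location -CreateOption Copy -SourceResourceId $snapshot.Id
$newDisk = New-AzDisk -ResourceGroupName "my-rg" -DiskName "my-vm-copy-osdisk" -Disk $diskCfg

# Attach the copied (still specialized) disk to a new VM configuration
$nic   = New-AzNetworkInterface -Name "my-vm-copy-nic" -ResourceGroupName "my-rg" -Location $vm.Location -SubnetId $subnetId
$newVm = New-AzVMConfig -VMName "my-vm-copy" -VMSize $vm.HardwareProfile.VmSize
$newVm = Set-AzVMOSDisk -VM $newVm -ManagedDiskId $newDisk.Id -CreateOption Attach -Windows
$newVm = Add-AzVMNetworkInterface -VM $newVm -Id $nic.Id
New-AzVM -ResourceGroupName "my-rg" -Location $vm.Location -VM $newVm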
I have created 2 Virtual Machines (VMs) using RM (Resource Manager) deployment method. How can I clone a VM using Azure PowerShell scripts?
You can follow the documentation guide here: https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-capture-image/
First, open Azure PowerShell and log in to your Azure account.
You can find the subscriptions your Azure account has by using the command Get-AzureRmSubscription.
Next you will need to deallocate the resources used by this virtual machine.
Stop-AzureRmVM -ResourceGroupName YourResourceGroup -Name YourWindowsVM
Next you need to set the status of the virtual machine to Generalized. Note that you need to do this because the generalization step itself (sysprep) does not mark the VM in a way that Azure can understand.
Set-AzureRmVm -ResourceGroupName YourResourceGroup -Name YourWindowsVM -Generalized
Next, Capture the virtual machine image to a destination storage container using this command.
Save-AzureRmVMImage -ResourceGroupName YourResourceGroup -VMName YourWindowsVM -DestinationContainerName YourImagesContainer -VHDNamePrefix YourTemplatePrefix -Path Yourlocalfilepath\Filename.json
The -Path parameter is optional and can be used to save the JSON template locally. The -DestinationContainerName parameter is the name of the container that you want to hold your images in. The URL of the image stored will be similar to https://YourStorageAccountName.blob.core.windows.net/system/Microsoft.Compute/Images/YourImagesContainer/YourTemplatePrefix-osDisk.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.vhd. It will be created in the same storage account as that of the original virtual machine.
The Azure documentation offers a detailed guide to do this, so follow the instructions and in case of trouble, you can ask me!
BEWARE! When you sysprep your machine and make it generalized, you CANNOT undo this!!!
This means that your VM will stay stopped (deallocated) and you can create infinite clones from it, but you cannot start it again, so make sure it is not in production :)