I’m trying to provision multiple Azure VMs using DSC (Desired State Configuration). Unfortunately, all the examples and documentation out there show only simple “one machine” provisioning. I have experimented and researched but still cannot make it work.
By any chance, could somebody point me to an example showing how to provision multiple VMs with different configurations?
Thanks!
You can use the Azure PowerShell cmdlets to create VMs and set DSC configurations on them. Here is a link to an article that describes this:
http://www.powershellmagazine.com/2014/08/05/understanding-azure-vm-dsc-extension/
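As a rough sketch (machine names, features, and service names below are hypothetical placeholders), a single DSC configuration document can contain multiple Node blocks, one per VM or group of VMs, each with its own settings; you then publish it once and apply it to each VM with the DSC extension, using the classic-module cmdlets the article above covers:

```powershell
# Sketch only: VM names, service name, and features are placeholders.
Configuration MultiVmConfig
{
    # Web servers get IIS
    Node @('WebVm01','WebVm02')
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }

    # The app server gets a different configuration
    Node 'AppVm01'
    {
        WindowsFeature NetFramework
        {
            Ensure = 'Present'
            Name   = 'NET-Framework-45-Core'
        }
    }
}

# Publish the configuration archive to Azure storage once...
Publish-AzureVMDscConfiguration -ConfigurationPath .\MultiVmConfig.ps1

# ...then apply it to each VM via the DSC extension.
foreach ($vmName in 'WebVm01','WebVm02','AppVm01') {
    Get-AzureVM -ServiceName 'MyService' -Name $vmName |
        Set-AzureVMDscExtension -ConfigurationArchive 'MultiVmConfig.ps1.zip' `
                                -ConfigurationName 'MultiVmConfig' |
        Update-AzureVM
}
```

Each node only receives the resources declared in the Node block(s) whose name matches it, which is how one configuration document drives several differently-configured VMs.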
We have a couple of clusters running in Azure using the AKS service. In the clusters we are running multiple projects, each in a different namespace, naturally.
I would like to extract data about the number of pods, their status, conditions, etc. for a specific namespace.
It is fairly easy to do locally using the AZ CLI and installing kubectl (to get the namespace names), and then we can either stay in kubectl or use the PowerShell Az.Aks module to extract this information.
I would like to have this process fully automated and scheduled. My first thought was an Azure Runbook, but it does not support the AZ CLI.
What would be the best way to achieve what I have mentioned above? I believe the best way would be to use only kubectl commands, but how can we trigger them automatically? Is a pipeline in Azure DevOps a solution?
Thanks,
Rafal
You can use Azure Automation via Runbooks; instead of the Azure CLI you were trying, you can do the same work through PowerShell. Runbooks can also be written in Python, but for now only Python 2 is supported. The same task can also be done with Azure Functions, which supports many languages such as C#, Java, Python, and more.
Runbooks
https://learn.microsoft.com/en-us/azure/automation/automation-runbook-types
Azure Functions
https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-powershell
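As an illustrative sketch of the Runbook approach (resource group, cluster, and namespace names are placeholders, and the result's `Logs` property name should be verified against your Az.Aks version), the AKS run-command feature lets you run kubectl inside the cluster, so the Runbook itself needs neither the AZ CLI nor a local kubectl install:

```powershell
# Sketch of a PowerShell Runbook; all names are hypothetical placeholders.

# Authenticate with the Automation account's managed identity.
Connect-AzAccount -Identity

# Run kubectl inside the cluster via the AKS run-command feature.
$result = Invoke-AzAksRunCommand -ResourceGroupName 'my-rg' `
                                 -Name 'my-aks-cluster' `
                                 -Command 'kubectl get pods -n my-namespace -o json' `
                                 -Force

# Assumption: the command output comes back on the result's Logs property.
$pods = ($result.Logs | ConvertFrom-Json).items
foreach ($pod in $pods) {
    [pscustomobject]@{
        Name  = $pod.metadata.name
        Phase = $pod.status.phase
        Ready = ($pod.status.conditions | Where-Object type -eq 'Ready').status
    }
}
```

Scheduling is then just a matter of linking the Runbook to an Automation schedule.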
I am new to Terraform and was wondering if we can use Terraform to implement a kind of disaster recovery for Azure API manager.
I know there is disaster recovery implementation by Microsoft for API manager but I wanted to explore if I can just recreate the whole thing using Terraform.
I am able to recreate the API manager using Terraform with the same configuration/APIs etc.
The only thing that is unclear to me is how to back up and recreate the same subscriptions/products in API Management using Terraform.
For example, if someone deletes the API manager, I want to recreate it using Terraform and import all the existing products/subscriptions (keys).
Any ideas?
Similar to using ARM Templates, you can use Terraform to deploy Azure APIM as well. You can refer to the azurerm provider docs for more information.
But for all runtime data like users & subscriptions, you will have to consider setting up a backup/restore system utilizing the built-in feature.
After deploying APIM using terraform, you will have to restore the runtime data separately. Also, depending on your Recovery Time Objective, you will have to take frequent backups.
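As a sketch of that backup/restore flow using the built-in feature (the storage account, container, and service names below are placeholders):

```powershell
# Sketch only: resource names and the storage key variable are placeholders.
$storage = New-AzStorageContext -StorageAccountName 'mybackupsa' `
                                -StorageAccountKey $storageKey

# Back up APIM runtime data (users, subscriptions, keys) to blob storage.
# Run this on a schedule matching your Recovery Time Objective.
Backup-AzApiManagement -ResourceGroupName 'my-rg' -Name 'my-apim' `
    -StorageContext $storage `
    -TargetContainerName 'apim-backups' -TargetBlobName 'apim.bak'

# After recreating the service with Terraform, restore the runtime data.
Restore-AzApiManagement -ResourceGroupName 'my-rg' -Name 'my-apim' `
    -StorageContext $storage `
    -SourceContainerName 'apim-backups' -SourceBlobName 'apim.bak'
```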
PS: Logic Apps are a great way to set up automatic backups. There is an official sample that you can refer to for this.
I am trying to use Terraform to create a Service Fabric cluster in Azure.
I have created configurations for the following resources using a template provided by TrevorVonSeggern: https://github.com/TrevorVonSeggern/ServiceFabric_Terraform
This will create the resources in Azure; however, the Service Fabric cluster just sits on "Deploying" and the nodes themselves never display.
There seems to be a distinct lack of configuration resources for creating a Service Fabric cluster using Terraform, and HashiCorp's documentation on this resource is not as in-depth as for other resources.
Provisioning with PowerShell is easier, as there are more resources to guide you.
If anyone has any working examples please can you share them?
Thanks
I have managed to deploy this successfully by deploying the cluster, then going through the extensions in the generated ARM template and adding them (as a JSON string) to the Terraform config for the VMSS.
I could not find anything in the Terraform documentation on this resource to assist with this.
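For illustration, that extension can be declared on the VMSS resource with its settings encoded as JSON. This is only a sketch: the node type, durability level, version, and variable names are assumptions you would replace with the values from your own exported ARM template.

```hcl
# Sketch only: values are placeholders taken from a typical exported template;
# check the extension version and settings for your own cluster.
resource "azurerm_virtual_machine_scale_set" "sf_nodes" {
  # ... other VMSS settings (sku, os_profile, network_profile, etc.) ...

  extension {
    name                 = "ServiceFabricNode"
    publisher            = "Microsoft.Azure.ServiceFabric"
    type                 = "ServiceFabricNode"
    type_handler_version = "1.0"

    settings = jsonencode({
      clusterEndpoint = azurerm_service_fabric_cluster.main.cluster_endpoint
      nodeTypeRef     = "nt0"
      durabilityLevel = "Bronze"
    })

    protected_settings = jsonencode({
      StorageAccountKey1 = var.storage_key1
      StorageAccountKey2 = var.storage_key2
    })
  }
}
```

Without this extension the VMSS instances never join the cluster, which matches the "stuck on Deploying, no nodes" symptom described above.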
Building up Azure network security groups and rules requires a fair amount of work. My question is whether there is a way to back them up.
I came across Get-AzureVNetConfig -ExportToFile, which is a convenient way to back up VNet settings. Restore can be done with Set-AzureVNetConfig -ConfigurationPath. In effect, this gives a very nice way of writing a VNet spec (in XML).
I am looking for an XML-based way of writing NSG rules, so I can back them up and restore them at will.
I would recommend using Azure Resource Manager templates for this purpose. You can build JSON files that describe the NSG. Examples of templates with NSGs can be found in the quickstart gallery.
More about Azure Resource Manager
Essentially everything in Azure is ARM templates behind the scenes; that is what declarative syntax is all about. So one (more labor-intensive) way would be:
Use Azure PowerShell to export the NSG ARM templates along with their rules.
The script should also export the associated subnet or NIC information for better capture.
Schedule that script with an Azure Runbook in an Azure Automation account to export every day or every few hours.
Make sure the exported ARM templates are uploaded to Azure File Storage or Blob Storage.
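The steps above can be sketched roughly as follows (resource group, storage account, and container names are placeholders):

```powershell
# Sketch only: names are hypothetical placeholders.
$rg = 'my-rg'
foreach ($nsg in Get-AzNetworkSecurityGroup -ResourceGroupName $rg) {
    # Export the NSG (including its rules) as an ARM template.
    Export-AzResourceGroup -ResourceGroupName $rg `
        -Resource $nsg.Id `
        -Path ".\$($nsg.Name).json" -Force

    # Capture the associated subnet and NIC references as well.
    $nsg.Subnets           | Select-Object Id | ConvertTo-Json |
        Set-Content ".\$($nsg.Name)-subnets.json"
    $nsg.NetworkInterfaces | Select-Object Id | ConvertTo-Json |
        Set-Content ".\$($nsg.Name)-nics.json"
}

# Upload the exported templates to Blob Storage.
$ctx = New-AzStorageContext -StorageAccountName 'mybackupsa' -UseConnectedAccount
Get-ChildItem .\*.json | ForEach-Object {
    Set-AzStorageBlobContent -File $_.FullName -Container 'nsg-backups' `
        -Context $ctx -Force
}
```

Running this body inside a scheduled Runbook in an Automation account covers the "every day / every few hours" requirement.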
There is also an easier way: a ready-made marketplace solution that does all of this for you - https://azuremarketplace.microsoft.com/en-in/marketplace/apps/bowspritconsultingopcprivatelimited1596291408582.nsgbackup?tab=Overview
Thanks.
I deployed a Java application to the Windows Azure cloud on a Virtual Machine by following this video: https://www.youtube.com/watch?v=h8OyTfQPn1I. Now I want to apply autoscaling. However, when I go to the "scale" tab, the options are greyed out and I am unable to turn on scaling.
P.S. - I have already created an availability set and added standard-instance VMs of the same configuration to it.
Please do help.
Thanks! :)
Autoscaling in Azure Virtual Machines requires that you pre-create all the VMs in an availability set and then shut down/deallocate those not currently needed. I suspect that your VMs are not in an availability set.