We have a couple of clusters running in Azure using the AKS service. In these clusters we run multiple projects, each in its own namespace, naturally.
I would like to extract data about the number of pods, their status, conditions, etc. for a specific namespace.
It is fairly easy to do locally using the Azure CLI and installing kubectl (to get the namespace names); then we can either stay in kubectl or use the PowerShell Az.Aks module to extract this information.
I would like to have this process fully automated and scheduled. My first thought was an Azure Automation runbook, but it does not support the Azure CLI.
What would be the best way to achieve what I have mentioned above? I believe the best way would be to use only kubectl commands, but how can we trigger them automatically? Is a pipeline in Azure DevOps a solution?
Thanks,
Rafal
You can use Azure Automation via runbooks; instead of the Azure CLI you were trying, you can do the same thing through PowerShell. Runbooks can also be written in Python, but for now only Python 2 is supported. The same task can be done with Azure Functions, which supports many more languages, such as C#, Java, and Python.
Runbooks
https://learn.microsoft.com/en-us/azure/automation/automation-runbook-types#:~:text=PowerShell%20Workflow%20runbooks%20are%20text,the%20runbook%20into%20Azure%20Automation.
Azure Functions
https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-powershell
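As a rough illustration (not a definitive implementation), a PowerShell runbook along these lines could collect the pod data. It assumes the Automation account has a managed identity with access to the cluster, that kubectl is available on the worker (for example a Hybrid Runbook Worker), and that all resource names and the namespace are placeholders:

Param(
    [string] $ResourceGroupName = "my-aks-rg",
    [string] $ClusterName       = "my-aks-cluster",
    [string] $Namespace         = "my-namespace"
)

# Sign in with the Automation account's managed identity (assumption: one is enabled).
Connect-AzAccount -Identity | Out-Null

# Merge the AKS credentials into the local kubeconfig for this session.
Import-AzAksCredential -ResourceGroupName $ResourceGroupName -Name $ClusterName -Force

# Query the pods in the namespace as JSON (kubectl must be on the worker's PATH).
$podsJson = kubectl get pods --namespace $Namespace --output json | Out-String
$pods = ($podsJson | ConvertFrom-Json).items

# Summarise name, phase and conditions for each pod.
$pods | ForEach-Object {
    [pscustomobject]@{
        Pod        = $_.metadata.name
        Phase      = $_.status.phase
        Conditions = ($_.status.conditions | ForEach-Object { "$($_.type)=$($_.status)" }) -join ', '
    }
}

From there you can write the summary to storage, a Log Analytics workspace, or wherever you need it.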
Related
I want to tabulate the compute quotas for each Azure ML workspace, in each Azure location, for my organization's Azure subscription. Although it is possible to look at the quotas manually through the Azure Portal (link), I have not found a way to do this with the Azure CLI or Python SDK for Azure. Since there are many resource groups and AML workspaces for different teams under my Azure subscription, it would be much more efficient to do this programmatically rather than manually through the portal. Is this even possible, and if so how can it be done?
It does not look like these commands are currently in the CLI or the Python SDK. The CLI uses the Python SDK, so what's missing from one tends to be missing from the other.
Fortunately, you can invoke the REST endpoints directly, either in Python or by using the az rest command in the CLI.
There are a few endpoints that may interest you:
Usage and Quotas for a region:
/subscriptions/{subscriptionId}/providers/Microsoft.MachineLearningServices/locations/{location}/usages?api-version=2019-05-01
/subscriptions/{subscriptionId}/providers/Microsoft.MachineLearningServices/locations/{location}/quotas?api-version=2020-04-01
The process for getting REST specs into the official documentation is fairly lengthy, so it isn't published yet, but if you are willing to use Swagger docs to explore what is available, the 2020-06-01 version of the API is on GitHub, which includes endpoints for updating quotas as well as retrieving them: https://github.com/Azure/azure-rest-api-specs/tree/master/specification/machinelearningservices/resource-manager/Microsoft.MachineLearningServices/stable/2020-06-01
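If you prefer to stay in PowerShell rather than the CLI, a small sketch along these lines should work; Invoke-AzRestMethod plays the same role as az rest here, the location value is a placeholder, and the response shape is assumed to follow the common Azure usages schema:

# Requires the Az.Accounts module and an authenticated context.
Connect-AzAccount | Out-Null
$subscriptionId = (Get-AzContext).Subscription.Id
$location       = "westeurope"   # placeholder region

# Call the usages endpoint listed above; the quotas endpoint works the same way.
$response = Invoke-AzRestMethod -Method GET `
    -Path "/subscriptions/$subscriptionId/providers/Microsoft.MachineLearningServices/locations/$location/usages?api-version=2019-05-01"

# Flatten the result into name / current value / limit / unit.
($response.Content | ConvertFrom-Json).value |
    Select-Object @{ n = "Name"; e = { $_.name.value } }, currentValue, limit, unit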
We use Azure Pipelines to implement our continuous integration pipeline. The app is deployed to virtual machines that we need to provision and configure. There are tons of libraries, patches, configurations, and applications that we need to deploy on the target VMs before we get our code onto them.
The question is: what is the best tool to provision and configure these virtual machines? I was thinking of using Ansible AWX. Basically, the Azure pipeline would make a call to the AWX API, which would then take it from there and finalize things.
There is an Azure Pipelines extension that allows me to execute a playbook: https://github.com/microsoft/azure-pipelines-extensions/blob/master/Extensions/Ansible/Src/readme.md. But I would like to use AWX instead so that my Ansible/deployment code is decoupled from my pipeline.
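To make the idea concrete, the call from the pipeline would be roughly something like the following (the AWX host, job template ID, and token are placeholders; AWX exposes a launch endpoint per job template, and extra_vars are only honoured if the template prompts for them):

# Launch an AWX job template from a pipeline PowerShell step (all values are placeholders).
$awxHost    = "https://awx.example.com"
$templateId = 42
$headers    = @{ Authorization = "Bearer $env:AWX_TOKEN" }

Invoke-RestMethod -Method Post -Headers $headers `
    -Uri "$awxHost/api/v2/job_templates/$templateId/launch/" `
    -ContentType "application/json" `
    -Body (@{ extra_vars = @{ target_env = "staging" } } | ConvertTo-Json)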
Any suggestions?
As far as I know, Ansible allows you to automate the deployment and configuration of resources in your environment. It could meet your needs.
As you said, Azure Pipelines supports running a playbook through the Ansible task (the Ansible extension).
So I think you can complete both the VM configuration and the code deployment directly in the Azure pipeline.
If you want to separate these two steps, you can split them into two pipelines (VM configuration and code deployment). To avoid confusion between configuration and deployment code, you can also split them into two repos.
On the other hand, if you run the playbook in the Azure pipeline, you can also add tasks that change parameters in the playbook (e.g. Replace Tokens).
Here is an operation guide on using Ansible in Azure Pipelines.
By the way, if the virtual machine is an Azure VM, you could also use an ARM template to update the Azure VM resource.
Personally, I would drop the AWX requirement. It's something else to manage and maintain, and an entirely separate interface too. Instead, just do your whole pipeline in one place: Azure DevOps. Pick one or the other. Tower doesn't have built-in source control, so I recommend ADO over it, but they'll both run Ansible and they'll both do it on your own control nodes. There's no reason to take an extra step with another tool; it adds way too much complexity.
I would like to perform the following steps on a schedule (presumably using Azure Automation):
Provision a VM in Azure
Run a PowerShell script on that VM
Deprovision VM
Actually I have more steps but left only 3 for simplicity.
I am new to IaC and would appreciate your general guidance and advice.
Is this within the scope of Azure Automation, or do I need something else?
I would like to code everything in text format, put it in Git, and update it automatically via pull requests.
Should I use Runbooks or DSC?
Regarding step 2, I cannot figure out how I can upload my PowerShell script to the newly created VM and run it locally. The script downloads some files and updates some remote resources.
Thanks,
Ruslan
There are a lot of options and tools to achieve your goal.
If you will be working strictly in the Azure cloud, the following tools are most commonly used for building an environment:
Azure PowerShell
Azure CLI
ARM templates
Each of them is very similar, but they are all a little different with their own benefits; they are all tools for building your virtual infrastructure. For configuring your resources there are other tools. As you mentioned yourself, DSC is a tool for configuring virtual machines.
If you are planning to use GitHub to push your code, I would recommend using ARM templates. You can very easily reuse your own or other templates by referencing them in your code. The syntax might be the 'hardest' to learn and understand in comparison to the CLI and PowerShell, but it is also the most frequently used.
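For instance, a template kept in your Git repo can be deployed from a script or pipeline step roughly like this (resource group name and file paths are placeholders):

# Create (or reuse) a resource group and deploy an ARM template stored in the repo.
New-AzResourceGroup -Name "iac-demo-rg" -Location "westeurope" -Force

New-AzResourceGroupDeployment -ResourceGroupName "iac-demo-rg" `
    -TemplateFile ".\templates\azuredeploy.json" `
    -TemplateParameterFile ".\templates\azuredeploy.parameters.json"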
It is possible to build your environment and configure it in the same script using the Azure CLI, Azure PowerShell, or another open-source solution like Terraform, but this is not best practice.
A lot of starter scripts are publicly available on github and in the Microsoft docs.
If you have any specific questions you can always send me a message; I am currently working on Azure Automation myself.
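Regarding step 2 of your question, a rough Azure PowerShell sketch of the three steps could look like the following; all names are placeholders, and Invoke-AzVMRunCommand is one way to push a local script onto the new VM and run it there:

$rg   = "automation-demo-rg"
$name = "temp-vm-01"

# 1. Provision the VM (New-AzVM prompts for admin credentials unless -Credential is supplied).
New-AzVM -ResourceGroupName $rg -Name $name -Location "westeurope" -Image "Win2019Datacenter"

# 2. Upload a local PowerShell script to the VM and run it there.
Invoke-AzVMRunCommand -ResourceGroupName $rg -VMName $name `
    -CommandId "RunPowerShellScript" -ScriptPath ".\configure.ps1"

# 3. Deprovision the VM when the work is done.
Remove-AzVM -ResourceGroupName $rg -Name $name -Force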
I would like to publish an Azure Managed Application to the Azure Marketplace. Is it possible to add my own PowerShell script to the "app.zip", which executes some additional deployment steps besides the Azure Resource Manager template?
The script would invoke the ARM template and handle some of the template's outputs.
The way to think about these is that you can only do tasks that can be done in a template. Today, there's no way to run an arbitrary script in an ARM template.
That help?
After some research and contacting MS Support I found two possible solutions:
Using a VM with a Custom Script Extension (sketched below). Downside: the VM takes a long time to start up and is expensive if we do not delete it afterwards.
Using an Azure Container Instance to run the script. It starts up in about 45 seconds and doesn't cost anything when we don't use it. -> Tutorial
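As an illustration of the first option, attaching the Custom Script Extension with Azure PowerShell looks roughly like this (resource names and the script URI are placeholders; in a managed application this would normally be expressed inside the ARM template itself):

# Attach the Custom Script Extension so the VM downloads and runs the script once.
Set-AzVMCustomScriptExtension -ResourceGroupName "managed-app-rg" -VMName "helper-vm" `
    -Name "postDeploymentScript" -Location "westeurope" `
    -FileUri "https://mystorageaccount.blob.core.windows.net/scripts/post-deploy.ps1" `
    -Run "post-deploy.ps1"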
I'm trying to provision multiple Azure VMs using DSC (Desired State Configuration). Unfortunately, all the examples and documentation out there show only simple "one machine" provisioning. I have experimented and researched it but still cannot make it work.
By any chance, could somebody point me to an example showing how to provision multiple VMs with different configurations?
Thanks!
You can use the Azure PowerShell cmdlets to create VMs and set DSC configurations on them. Here is a link to an article that describes this:
http://www.powershellmagazine.com/2014/08/05/understanding-azure-vm-dsc-extension/
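As a rough sketch of the idea (not taken from the linked article): one configuration script can contain a Node block per machine, be published once, and then be applied to each VM with the Az DSC extension cmdlets. All names, the storage account, and the extension version are placeholders, and the Node names are assumed to match the VMs' computer names:

# MultiVmConfig.ps1 - different settings per node in a single configuration.
Configuration MultiVmConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "web-vm" {
        WindowsFeature IIS { Name = "Web-Server"; Ensure = "Present" }
    }
    Node "worker-vm" {
        WindowsFeature NetFx { Name = "NET-Framework-45-Core"; Ensure = "Present" }
    }
}

# Package the configuration, upload it to a storage account, then apply it to each VM.
Publish-AzVMDscConfiguration -ConfigurationPath ".\MultiVmConfig.ps1" `
    -ResourceGroupName "dsc-rg" -StorageAccountName "dscstorageacct"

foreach ($vm in "web-vm", "worker-vm") {
    Set-AzVMDscExtension -ResourceGroupName "dsc-rg" -VMName $vm `
        -ArchiveBlobName "MultiVmConfig.ps1.zip" -ArchiveStorageAccountName "dscstorageacct" `
        -ConfigurationName "MultiVmConfig" -Version "2.77" -Location "westeurope"
}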