I have been working with Azure Policy -> Policy definitions for some time.
I use the Get-AzPolicyDefinition PowerShell command for that purpose.
Recently I started working with Initiatives, which are nice, so I have created a custom Initiative definition. Now I want to assign the initiative to Management Groups in either PowerShell or the az CLI. When I use Get-AzPolicyDefinition, it does not show the Initiative definition.
Is there a command that I am not aware of to achieve this?
Note: my goal is to assign my custom initiative definition to a Management Group via the command line.
As you can see, the DefaultPolicies initiative that I created and all the built-in initiatives are not shown in the list when I run this command:
Get-AzPolicyDefinition | ForEach-Object { $_.Properties.DisplayName }
This returns the following; here is the end of the list. As you can see in the image below, everything up to the red line is of definition type 'Policy' only:
...
...
Kubernetes cluster containers should run with a read only root file system
Kubernetes cluster pods and containers should only use allowed SELinux options
Kubernetes cluster containers CPU and memory resource limits should not exceed the specified limits
[Preview]: Azure Key Vault Managed HSM keys using elliptic curve cryptography should have the specified curve names
Secrets should not be active for longer than the specified number of days
Kubernetes cluster pods and containers should only run with approved user and group IDs
Kubernetes cluster pod FlexVolume volumes should only use allowed drivers
[Preview]: Certificates should not expire within the specified number of days
Kubernetes cluster containers should only use allowed ProcMountType
Kubernetes cluster containers should only use allowed images
Keys using elliptic curve cr
Azure Policy Initiatives are called "Policy Set Definitions" in the Az PowerShell module - these cmdlets let you interact with them:
Get-AzPolicySetDefinition
New-AzPolicySetDefinition
Remove-AzPolicySetDefinition
Set-AzPolicySetDefinition
See the Policy section of the Az PowerShell documentation for more details, or use Get-Help in a PowerShell console.
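Since the question also mentions the az CLI: here is a minimal sketch of assigning a custom initiative at management group scope (the initiative name 'MyInitiative' and management group ID 'myMgmtGroup' are placeholders):

# Initiatives are "policy set definitions" in the az CLI as well
az policy set-definition show --name MyInitiative --management-group myMgmtGroup

# Assign the initiative at the management group scope
# (for a management-group-level definition, the full definition resource ID may be required)
az policy assignment create --name MyInitiativeAssignment \
  --scope "/providers/Microsoft.Management/managementGroups/myMgmtGroup" \
  --policy-set-definition MyInitiative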
Related
I created an AKS cluster using the Azure Machine Learning SDK extension and attached it to the workspace I created. When the cluster is created and attached, it doesn't show any error. But when I try to detach it from the workspace, the operation is not accepted.
I would like to detach the existing AKS cluster from the workspace either programmatically, using the CLI, or even using the Azure portal.
Using the SDK or Azure CLI machine learning extensions to delete the AKS cluster will not work; it will not get deleted or detached that way. Instead, we need to detach it explicitly, and there are two ways to do this.
Python:
aks_target.detach()  # aks_target is the AksCompute object attached to the workspace
Azure CLI:
Before performing this step, we need the name of the AKS compute target attached to our workspace, the resource group, and the workspace name.
az ml computetarget detach -n youraksname -g yourresourcegroup -w yourworkspacename
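If the attached compute target name is unknown, it can be listed first; a hedged example, assuming the same azure-cli-ml extension exposes a list command:

# List the compute targets attached to the workspace (names are placeholders)
az ml computetarget list -g yourresourcegroup -w yourworkspacename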
I'm trying to find a way to give an entire AKS cluster access to Azure Key Vault. I have temporarily got this working by following the process below:
Go to the VMSS of the cluster -> Identity -> Set System Assigned Status to 'On'
Add this Managed identity as an access policy to Key Vault.
This works; however, whenever I stop and start the cluster, I have to re-create this managed identity and re-add it to Key Vault. I have tried using user-assigned identities for the VMSS as well, but that does not seem to work.
I also cannot use the Azure Pod Identity / Secrets Store CSI features for other reasons, so I'm just looking for a simple way to give my cluster permanent access to Key Vault.
Thanks in advance
A Pod is the smallest unit in Kubernetes. A Pod is a group of one or more containers that are deployed together on the same host (node).
A Pod runs on a node, which is controlled by the control plane (master).
Pods use OS-level virtualization and consume VMSS resources while they run, based on their requirements.
When you stop and restart the cluster/nodes, pods lose their resources, which leads to the loss of the pods; there will be no pods under the VMSS until you restart. When you restart your cluster/node, new pods are created with different names and different IP addresses.
From this GitHub discussion, I found that the MIC (Managed Identity Controller) removes the identity from the underlying VMSS when no pods are configured to use that identity. So, you have to recreate the Managed Identity for the VMSS.
You can refer to this link for a better understanding of how to access Key Vault from Azure AKS.
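As a sketch of one approach that should survive stop/start (this assumes the cluster uses a managed identity, and all names below are placeholders): grant the cluster's kubelet identity, rather than a manually enabled VMSS identity, access to the vault.

# Look up the object ID of the cluster's kubelet identity
KUBELET_ID=$(az aks show -g myResourceGroup -n myAKSCluster \
  --query identityProfile.kubeletidentity.objectId -o tsv)

# Grant that identity an access policy on the Key Vault
az keyvault set-policy --name myKeyVault \
  --object-id "$KUBELET_ID" \
  --secret-permissions get list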
An Azure virtual machine scale set needs to use secrets from a pre-existing Azure key vault.
Terraform is the tool being used to create the Azure virtual machine scale set.
What specific terraform code must be used in order to configure the virtual machines in the virtual machine scale set to be able to automatically retrieve secrets from a named key vault which exists before the virtual machine scale set is created?
The terraform azurerm_linux_virtual_machine_scale_set resource has a secrets block option that can receive a key_vault_id, but we have not been able to find examples.
What we need is for bash scripts running within the virtual machine scale set to be able to retrieve secrets from the pre-existing key vault. Specifically the cloud-init startup script.
So far, we have been using the CLI to make requests to the key vault in the cloud-init script, but we are getting access denied errors.
The answer to this question should include 1. terraform code for creating the virtual machine scale set, and 2. simple code to retrieve a secret from that key vault that can be added to a bash script running in the virtual machine scale set, such as the cloud-init script.
Does the VMSS need to run as some sort of identity to make this happen?
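For reference, the retrieval step we are attempting in cloud-init looks roughly like this (the vault and secret names are placeholders), which presumably requires the scale set to run with a managed identity:

# Sign in as the instance's managed identity from inside the VM
az login --identity --allow-no-subscriptions

# Read a secret value from the pre-existing vault
az keyvault secret show --vault-name myKeyVault --name mySecret --query value -o tsv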
When deploying new jobs and services to an Azure Kubernetes Service cluster, the pods fail to obtain valid AAD access tokens that include all of the granted permissions. If new permissions were added on the same day, before or after a deployment, the tokens still do not pick them up. This issue has been observed so far with permissions granted to Active Directory groups over Key Vault, Storage Account, and SQL database scopes.
Example: I have a .NET 5.0 C# API running on 3 pods with antiaffinity rules located each on a separate node. The application reads information from a SQL database. I made a release and added the database permissions afterwards. Things I have tried so far to make the application reset the access tokens:
kubectl delete pods --all -n <namespace> which essentially created 3 new pods again failing due to insufficient permissions.
kubectl apply -f deployment.yaml to deploy a new version of the image running in the containers, again all 3 pods kept failing.
kubectl delete -f deployment.yaml followed by kubectl apply -f deployment.yaml to erase the old kubernetes object and create a new one. This resolved the issue on 2/3 pods, however, the third one kept failing due to insufficient permissions.
kubectl delete namespace <namespace> to erase the entire namespace with all configuration available and recreated it again. Surprisingly, again 2/3 pods were running with the correct permissions and the last one did not.
The commands were run more than one hour after the permissions were added to the group. The database tokens are valid for 24 hours, and when I have seen this issue occur with cronjobs, I had to wait 1 day for the task to execute correctly (none of the above steps worked in the cronjob scenario). The validity of the tokens kept changing, which implied that the pods are requesting new access tokens but again excluding the most recently added permissions. The only solution I have found that works 100% of the time is to destroy the cluster and recreate it, which is not viable in any production scenario.
The failing pod in my example was always the one running on node 00, which made me think there may be an extra caching layer on the first node of the cluster. However, I still do not understand why the other 2 pods ran with no issue, and what the right way is to restart my pods or refresh the access token to minimise the wait time until resolution.
Kubernetes version: 1.21.7.
The cluster has no AKS-managed AAD or pod-identity enabled. All RBAC is granted to the cluster MSI via AD groups.
Please check whether the steps below can work around the issue in your case.
To access the Kubernetes resources, you must have access to the AKS cluster, the Kubernetes API, and the Kubernetes objects. Ensure that you're either a cluster administrator or a user with the appropriate permissions to access the AKS cluster.
Things you need to do, if you haven't already:
Enable Azure RBAC on your existing AKS cluster, using:
az aks update -g myResourceGroup -n myAKSCluster --enable-azure-rbac
Create a Role that allows read access to all other Pods and Services.
Add the necessary roles (Azure Kubernetes Service Cluster User Role, Azure Kubernetes Service RBAC Reader/Writer/Admin/Cluster Admin) to the user, for example as shown below. See Microsoft Docs.
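For example, one of those roles can be granted with az role assignment create (the assignee and scope are placeholders):

# Grant a built-in AKS RBAC role to a user at the cluster scope
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Azure Kubernetes Service RBAC Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster"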
Also check the Troubleshooting guide.
Also check whether you need the "Virtual Machine Contributor" and "Storage Account Contributor" roles on the resource group containing the pods, and check whether the namespace is specified for the pod in case you have missed it (Stack Overflow reference). Also check whether a firewall is restricting network access to that pod.
Resetting the kubeconfig context using the az aks get-credentials command may clear a previously cached authentication token for the user:
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --overwrite-existing
(Reference)
Please also check the other references below:
kubernetes - Permissions error - Stack Overflow
Create role assignments for users to access cluster | Microsoft Docs
User can't access AKS cluster with RBAC enabled (github.com)
kubernetes - Stack Overflow
I want to deploy a VM in Microsoft Azure with a new size.
Usually I use a JSON template for the VM with size 'Standard_DS3'.
Now I would like to have another one with size 'A3', but this causes an error:
statusMessage:{"error":{"code":"InvalidParameter","target":"vmSize","message":"The value of parameter vmSize is invalid."}}
So I was wondering: where can I find the valid VM sizes for deployments in a location, and the correct size name to use in a template file?
One can list all VM sizes available in a specific location (e.g. westus) from Azure CLI 2.0 using the following command:
az vm list-sizes --location "westus"
Since you mentioned JSON templates in your question, I assume that you are using Azure Resource Manager to provision resources. If that's the case, you can use the following REST API endpoint to list all available virtual machine sizes in a region:
https://management.azure.com/subscriptions/{subscription-id}/providers/Microsoft.Compute/locations/{location}/vmSizes?api-version={api-version}
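The same endpoint can be called from the command line with az rest; a small sketch (the api-version here is an assumption, substitute a current one):

# Call the vmSizes endpoint directly; {subscription-id} is a placeholder
az rest --method get \
  --url "https://management.azure.com/subscriptions/{subscription-id}/providers/Microsoft.Compute/locations/westus/vmSizes?api-version=2021-07-01"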
This information is accessible using Azure CLI, i.e.: az vm list-sizes --location "eastus"
You can also reference Microsoft documentation to see the list of virtual machine sizes. Sounds like you need to use the "Large" size in your template to provision an A3 Standard VM.
This isn't always true. I've run into a situation where this command gave me a VM size that wasn't truly available in my location. This is a known issue with the Azure CLI.
Here is the statement from Azure support:
Cause: It is known that the command az vm list-sizes can expose sizes that are actually unavailable and we are working on that situation.
Resolution: The best option to mitigate this is to cross-check the information provided by that query against the restrictions on your subscription, which can be analyzed with the command az vm list-skus. For your scenario, you can see the SKU restrictions in West Europe by using the following:
az vm list-skus --location WestEurope --output table
You can use the Get-AzureRmVMSize cmdlet in PowerShell, e.g. Get-AzureRmVMSize -Location "westus". This list doesn't change too often, and I have a .NET library which contains a snapshot of it: https://github.com/aloneguid/microsoft-azure-strongtyped