I'm using this repo to create a kubernetes cluster on Azure using acs-engine.
I am wondering if anyone can help me identify how to reference the master VM's public IP address.
This would be used to SSH into the master VM (ssh user@public-ip), which is important because I want to run local-exec provisioners to configure my cluster with Ansible.
I don't believe it is the first_master_ip in the main.tf below (that value is set in the repo's variables.tf), and I don't know how to reference that IP either.
One other thing I have tried is to obtain the master VM's public IP address using the Azure command line, but I haven't had any success with that either, because I don't know how to get the cluster name that would be passed in with az acs kubernetes browse -g <resource-group-name> -n <cluster-name>.
Any help would be greatly appreciated, as I've really hit a roadblock with this.
provider "azurerm" {
subscription_id = "${var.azure_subscription_id}"
client_id = "${var.azure_client_id}"
client_secret = "${var.azure_client_secret}"
tenant_id = "${var.azure_tenant_id}"
}
# Azure Resource Group
resource "azurerm_resource_group" "default" {
name = "${var.resource_group_name}"
location = "${var.azure_location}"
}
resource "azurerm_public_ip" "test" {
name = "acceptanceTestPublicIp1"
location = "${var.azure_location}"
resource_group_name = "${azurerm_resource_group.default.name}"
public_ip_address_allocation = "static"
}
data "template_file" "acs_engine_config" {
template = "${file(var.acs_engine_config_file)}"
vars {
master_vm_count = "${var.master_vm_count}"
dns_prefix = "${var.dns_prefix}"
vm_size = "${var.vm_size}"
first_master_ip = "${var.first_master_ip}"
worker_vm_count = "${var.worker_vm_count}"
admin_user = "${var.admin_user}"
ssh_key = "${var.ssh_key}"
service_principle_client_id = "${var.azure_client_id}"
service_principle_client_secret = "${var.azure_client_secret}"
}
}
# Locally output the rendered ACS Engine Config (after substitution has been performed)
resource "null_resource" "render_acs_engine_config" {
provisioner "local-exec" {
command = "echo '${data.template_file.acs_engine_config.rendered}' > ${var.acs_engine_config_file_rendered}"
}
depends_on = ["data.template_file.acs_engine_config"]
}
# Locally run the ACS Engine to produce the Azure Resource Template for the K8s cluster
resource "null_resource" "run_acs_engine" {
provisioner "local-exec" {
command = "acs-engine generate ${var.acs_engine_config_file_rendered}"
}
depends_on = ["null_resource.render_acs_engine_config"]
}
I have no experience with Terraform, but acs-engine sets up a load balancer with a public IP that routes to your master (or balances across multiple masters). You can find the IP of that load balancer by resolving <dns_prefix>.<region>.cloudapp.azure.com.
But if you need the IP to provision something extra, this won't be enough when you have multiple masters.
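If the load balancer FQDN is enough for your use case, here is a rough sketch of wiring it into a local-exec provisioner. This is an assumption built on the repo's existing variables (dns_prefix, azure_location, admin_user), not something the repo provides; site.yml is a placeholder playbook, and azure_location has to be the short region name (e.g. westus) for the cloudapp.azure.com name to resolve.
# Hedged sketch, not part of the original repo: run Ansible against the master LB FQDN
# once the generated ARM template has actually been deployed.
resource "null_resource" "configure_cluster" {
  provisioner "local-exec" {
    # <dns_prefix>.<region>.cloudapp.azure.com resolves to the master load balancer's public IP
    command = "ansible-playbook -i '${var.dns_prefix}.${var.azure_location}.cloudapp.azure.com,' -u ${var.admin_user} site.yml"
  }

  depends_on = ["null_resource.run_acs_engine"]
}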
When I configure Azure monitoring using the OMS solution for VMs, following this answer: Enable Azure Monitor for existing Virtual machines using terraform, I notice that this feature is being deprecated and that Azure recommends moving to the new monitoring solution (which does not use the Log Analytics agent).
Azure lets me configure VM monitoring through the GUI, but I would like to do it using Terraform.
Is there a particular setup I have to use in Terraform to achieve this? (I am using a Linux VM, by the way.)
Yes, that is correct. The omsagent has been marked as legacy, and Azure now has a new monitoring agent called the "Azure Monitor agent". The solution below is for Linux; please check the official Terraform docs for Windows machines.
We need the following three resources to reproduce the UI setup in Terraform, plus the Azure Monitor agent VM extension shown in the code below:
azurerm_log_analytics_workspace
azurerm_monitor_data_collection_rule
azurerm_monitor_data_collection_rule_association
Below is the example code:
data "azurerm_virtual_machine" "vm" {
name = var.vm_name
resource_group_name = var.az_resource_group_name
}
resource "azurerm_log_analytics_workspace" "workspace" {
name = "${var.project}-${var.env}-log-analytics"
location = var.az_location
resource_group_name = var.az_resource_group_name
sku = "PerGB2018"
retention_in_days = 30
}
resource "azurerm_virtual_machine_extension" "AzureMonitorLinuxAgent" {
name = "AzureMonitorLinuxAgent"
publisher = "Microsoft.Azure.Monitor"
type = "AzureMonitorLinuxAgent"
type_handler_version = 1.0
auto_upgrade_minor_version = "true"
virtual_machine_id = data.azurerm_virtual_machine.vm.id
}
resource "azurerm_monitor_data_collection_rule" "example" {
name = "example-rules"
resource_group_name = var.az_resource_group_name
location = var.az_location
destinations {
log_analytics {
workspace_resource_id = azurerm_log_analytics_workspace.workspace.id
name = "test-destination-log"
}
azure_monitor_metrics {
name = "test-destination-metrics"
}
}
data_flow {
streams = ["Microsoft-InsightsMetrics"]
destinations = ["test-destination-log"]
}
data_sources {
performance_counter {
streams = ["Microsoft-InsightsMetrics"]
sampling_frequency_in_seconds = 60
counter_specifiers = ["\\VmInsights\\DetailedMetrics"]
name = "VMInsightsPerfCounters"
}
}
}
# associate to a Data Collection Rule
resource "azurerm_monitor_data_collection_rule_association" "example1" {
name = "example1-dcra"
target_resource_id = data.azurerm_virtual_machine.vm.id
data_collection_rule_id = azurerm_monitor_data_collection_rule.example.id
description = "example"
}
Reference:
monitor_data_collection_rule
monitor_data_collection_rule_association
I am creating a VPN using a script in Terraform, as no provider resource is available for it. This VPN also has some other attached resources, like security groups.
So when I run terraform destroy, it starts deleting the VPN, but in parallel it also starts deleting the security group. The security group deletion fails because the group is still associated with the VPN, which is still in the process of being deleted.
When I run terraform destroy -parallelism=1 it works fine, but due to some limitations, I cannot use this in prod.
Is there a way I can enforce VPN to be deleted first before any other resource deletion starts?
EDIT:
See the security group and VPN Code:
resource "<cloud_provider>_security_group" "sg" {
name = format("%s-%s", local.name, "sg")
vpc = var.vpc_id
resource_group = var.resource_group_id
}
resource "null_resource" "make_vpn" {
triggers = {
vpn_name = var.vpn_name
local_script = local.scripts_location
}
provisioner "local-exec" {
command = "${local.scripts_location}/login.sh"
interpreter = ["/bin/bash", "-c"]
environment = {
API_KEY = var.api_key
}
}
provisioner "local-exec" {
command = local_file.make_vpn.filename
}
provisioner "local-exec" {
when = "destroy"
command = <<EOT
${self.triggers.local_script}/delete_vpn_server.sh ${self.triggers.vpn_name}
EOT
on_failure = continue
}
}
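One approach that might work here (a sketch, assuming the destroy ordering is the only issue): add an explicit depends_on from the null_resource to the security group. Terraform destroys resources in reverse dependency order, so the VPN's destroy-time provisioner would then run and finish before the security group is deleted, without needing -parallelism=1.
# Hedged sketch: the explicit dependency makes Terraform destroy null_resource.make_vpn
# (running its destroy-time delete script) before it tries to delete the security group.
resource "null_resource" "make_vpn" {
  triggers = {
    vpn_name     = var.vpn_name
    local_script = local.scripts_location
  }

  depends_on = [<cloud_provider>_security_group.sg]

  # ... existing provisioners unchanged ...
}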
Newbie here, so don't punch too hard, please.
The big-picture goal is automating the process of provisioning a new SSL cert from Let's Encrypt, storing the cert in Azure Key Vault, and then propagating it to a bunch of Azure VMs. We have a solution in place, but of course it was created by people who are no longer with the organization, and now we're trying to improve it to work for our scenario. The current problem: Terraform is trying to delete an existing cert from Azure Key Vault and then create a new one with the same name. Of course, soft delete and purge protection are both enabled, and (!) they are imposed by an AAD group policy, so they can't be disabled to create a new vault without them. I've read here that it's possible to work around the issue using Terraform's null resource, but I know nothing about it to be able to use it. Does anyone have an idea how exactly this could be achieved? Please feel free to ask further questions, I'll answer what I can. Thanks!
To use null_resource, we must first define the null provider in our main.tf file. See Terraform null provider and null_resource explained | by Jack Roper | FAUN Publication.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.43"
    }
    null = {
      source  = "hashicorp/null"
      version = "~> 3.0.0"
    }
  }
}
Then, assuming you have defined the key vault and certificate resources:
resource "azurerm_key_vault" "kv" {
name = "ansumankeyvault01"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
....
tenant_id =
}
resource "azurerm_key_vault_certificate" "example" {
name = "generated-cert"
key_vault_id = azurerm_key_vault.example.id
certificate_policy {
...
}
key_properties {
...
}
lifetime_action {
action {
action_type = "AutoRenew"
}
trigger {
days_before_expiry = 30
}
...
}
Please check the workaround in the terraform-provider-azurerm issue you mentioned.
Refer: null_resource | Resources | hashicorp/null | Terraform Registry
PowerShell using the local-exec provisioner:
resource "null_resource" "script" {
provisioner "local-exec" {
command =
“Add-AzKeyVaultCertificate -VaultName ${azurerm_key_vault.kv.name}
-Name ${ azurerm_key_vault_certificate.example.anme } -CertificatePolicy ${ azurerm_key_vault_certificate.example.certificate_policy}”
interpreter = ["powershell", "-Command"]
depends_on = [azurerm_key_vault_certificate.example]
}
}
Or using az CLI commands:
resource "null_resource" "nullrsrc" {
provisioner "local-exec" {
command = " az keyvault certificate create --name
--policy ${ azurerm_key_vault_certificate.example.certificate_policy}
--vault-name ${azurerm_key_vault.kv.name}
Depends_on = [azurerm_key_vault_certificate.example]
}
}
Or you can create a PowerShell script and reference it:
resource "null_resource" "create_sql_user" {
provisioner "local-exec" {
command = ".'${path.module}\\scripts\\update_cert_version.ps1' "
interpreter = ["pwsh", "-Command"]
}
depends_on = [azurerm_key_vault_certificate.example]
}
References:
Azure CLI or PowerShell command to create new version of a certificate in keyvault - Stack Overflow
Execute AZ CLI commands using local-exec provisioner · Issue #1046 · hashicorp/terraform-provider-azurerm · GitHub
I am creating an Azure DNS zone and NS records using a Terraform script.
resource "azurerm_resource_group" "dns_management" {
name = "dns-managment"
location = "West US"
}
resource "azurerm_dns_zone" "mydomaincom" {
name = "mdomain.com"
resource_group_name = "${azurerm_resource_group.dns_management.name}"
}
resource "azurerm_dns_a_record" "projectmydomain" {
name = "project"
zone_name = "${azurerm_dns_zone.mydomaincom.name}"
resource_group_name = "${azurerm_resource_group.dns_management.name}"
ttl = 300
records = ["127.0.0.1"]
}
I want to copy these Azure NS records to a GCP Cloud DNS managed zone, and I am trying to do that with gcloud commands run from a Terraform script. How can I use gcloud commands with Terraform?
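One common pattern is to wrap the gcloud command in a null_resource with a local-exec provisioner. Here is a rough sketch, with two assumptions that are not part of your config: gcloud is installed and authenticated on the machine running Terraform, and a Cloud DNS managed zone named my-gcp-zone already exists in your GCP project (adjust the record name and zone to match your delegation setup).
# Hedged sketch: push the Azure zone's name servers into an existing GCP managed zone.
resource "null_resource" "copy_ns_to_gcp" {
  triggers = {
    name_servers = "${join(",", azurerm_dns_zone.mydomaincom.name_servers)}"
  }

  provisioner "local-exec" {
    command = "gcloud dns record-sets create ${azurerm_dns_zone.mydomaincom.name}. --zone=my-gcp-zone --type=NS --ttl=300 --rrdatas=${join(",", azurerm_dns_zone.mydomaincom.name_servers)}"
  }
}
If you would rather avoid shelling out, the google provider's google_dns_record_set resource can create the same NS record natively in Terraform.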
I am deploying AKS through terraform.
It's working great, but I would like to also enable identity on the VMSS object in order to allow pod level managed identity access (mostly grab keys from key vaults).
I can manually do this by going to the auto-created VMSS object that Azure creates once the AKS cluster is launched.
However, I do not see an option for this in the Terraform resource.
Has anyone run into this and found a way to pull it off?
My deployment code is like this:
resource "azurerm_kubernetes_cluster" "main" {
name = "myaks"
location = "centralus"
resource_group_name = "myrg"
dns_prefix = "myaks"
node_resource_group = "aksmanagedrg"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_B2ms"
vnet_subnet_id = "myakssubnetid"
os_disk_size_gb = 128
}
identity {
type = "SystemAssigned"
}
addon_profile {
aci_connector_linux {
enabled = false
}
azure_policy {
enabled = false
}
http_application_routing {
enabled = false
}
kube_dashboard {
enabled = true
}
oms_agent {
enabled = false
}
}
network_profile {
network_plugin = "azure"
load_balancer_sku = "standard"
}
}
Thanks!
It seems you're looking for pod-managed identities in Azure Kubernetes Service. If so, then unfortunately Terraform does not seem to support configuring that property. When you follow the article above to configure pod-managed identities, you can see the pod identity profile on the cluster, but there is no such option in the Terraform resource for you to configure it.
Instead, you can run the Azure CLI from Terraform via a null_resource and the local-exec provisioner. Here is an example:
resource "null_resource" "aks_update" {
provisioner "local-exec" {
command = "az aks update --resource-group ${azurerm_resource_group.aks.name} --name ${azurerm_kubernetes_cluster.aks.name} --enable-pod-identity"
}
}
resource "null_resource" "aks_add_poidentity" {
provisioner "local-exec" {
command = "az aks pod-identity add --resource-group ${azurerm_resource_group.aks.name} --cluster-name ${azurerm_kubernetes_cluster.aks.name} --namespace ${var.pod_identity_namespace} --name ${azurerm_user_assigned_identity.aks.name} --identity-resource-id ${azurerm_user_assigned_identity.aks.id}"
}
}
This could be a way to enable identity at the pod level for AKS.
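Note that the commands above reference azurerm_resource_group.aks and azurerm_user_assigned_identity.aks, which are not shown in the snippet. A minimal sketch of the user-assigned identity could look like this; the name, location and resource group values are just placeholders matching the question's example.
# Hedged sketch: the user-assigned identity referenced by the "az aks pod-identity add" command above.
resource "azurerm_user_assigned_identity" "aks" {
  name                = "myaks-pod-identity"
  location            = "centralus"
  resource_group_name = "myrg"
}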