I want to deploy a Windows VM with the Azure Cloud Adoption Framework (CAF) using Terraform. In the example configuration.tfvars, all of the configuration is done, but I cannot find the correct Terraform code to deploy this tfvars configuration.
The Windows VM module is here.
So far, I have written the code below:
module "caf_virtual_machine" {
source = "aztfmod/caf/azurerm//modules/compute/virtual_machine"
version = "5.0.0"
# belows are the 7 required variables
base_tags = var.tags
client_config =
global_settings = var.global_settings
location = var.location
resource_group_name = var.resource_group_name
settings =
vnets = var.vnets
}
The vnets, global_settings, and resource_group_name variables already exist in configuration.tfvars, and I have added the tags and location variables to it.
But what should I enter for the settings and client_config variables?
The virtual machine is a private submodule. You should use it by calling the base CAF module.
The README on the Terraform Registry explains how to leverage the core CAF module: https://registry.terraform.io/modules/aztfmod/caf/azurerm/latest/submodules/virtual_machine
Source code of an example:
https://github.com/aztfmod/terraform-azurerm-caf/tree/master/examples/compute/virtual_machine/211-vm-bastion-winrm-agents/registry
There is also a library of example configuration files showing how to deploy virtual machines:
https://github.com/aztfmod/terraform-azurerm-caf/tree/master/examples/compute/virtual_machine
module "caf" {
source = "aztfmod/caf/azurerm"
version = "5.0.0"
global_settings = var.global_settings
tags = var.tags
resource_groups = var.resource_groups
storage_accounts = var.storage_accounts
keyvaults = var.keyvaults
managed_identities = var.managed_identities
role_mapping = var.role_mapping
diagnostics = {
# Get the diagnostics settings of services to create
diagnostic_log_analytics = var.diagnostic_log_analytics
diagnostic_storage_accounts = var.diagnostic_storage_accounts
}
compute = {
virtual_machines = var.virtual_machines
}
networking = {
vnets = var.vnets
network_security_group_definition = var.network_security_group_definition
public_ip_addresses = var.public_ip_addresses
}
security = {
dynamic_keyvault_secrets = var.dynamic_keyvault_secrets
}
}
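For reference, a minimal sketch of the matching root-module variable declarations (types left loose for brevity; the names correspond to the inputs used above, and the values are supplied from configuration.tfvars, e.g. terraform plan -var-file=configuration.tfvars):
# Minimal sketch: declare one variable per input passed to module "caf" above.
variable "global_settings" {
  default = {}
}

variable "tags" {
  default = {}
}

variable "resource_groups" {
  default = {}
}

variable "vnets" {
  default = {}
}

variable "virtual_machines" {
  default = {}
}

# ...repeat for storage_accounts, keyvaults, managed_identities, role_mapping,
# diagnostic_log_analytics, diagnostic_storage_accounts,
# network_security_group_definition, public_ip_addresses and
# dynamic_keyvault_secrets.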
Note - it is recommended to use the VS Code devcontainer provided in the source repository to execute the Terraform deployment. The devcontainer includes the tooling required to deploy Azure solutions.
Related
I am currently researching how I could move my infrastructure to Azure using Terraform.
Here is the scenario where I am stuck:
I have a Docker image that uses the COPY command to add some files into it. One solution I can think of is to first build this image using Docker, push it to an Azure container registry, and then use it as a container inside the azurerm_container_group resource.
Using the kreuzwerker/docker provider I could also just use the Dockerfile, but I could not find an equivalent solution in Azure.
So, what would be your suggestions?
Thanks in advance!
The base for Azure is:
resource "azurerm_container_group" "containers" {
  name                = "xxx"
  location            = "xxx"
  resource_group_name = "xxx"
  ip_address_type     = "Public"
  os_type             = "Linux"

  container {
    name   = "xxx"
    image  = "xx/xx:latest"
    cpu    = "1"
    memory = "1.5"
  }
}
I tried this in my environment with the code below.
Check the name of the image given when pushing the Docker image to the registry. Usually it will be pushed to Docker Hub. To push an image to the Microsoft Container Registry instead, prepend mcr.microsoft.com/ to the image name.
Code:
provider "docker" {
# host = azurerm_container_registry.acr.login_server
registry_auth {
address = azurerm_container_registry.acr.login_server
username = azurerm_container_registry.acr.admin_username
password = azurerm_container_registry.acr.admin_password
}
}
resource "docker_registry_image" "helloworld" {
name = "mcr.microsoft.com/azuredocs/aci-helloworld:latest"
build {
context = "${path.cwd}/absolutePathToContextFolder"
dockerfile = "Dockerfile"
}
}
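The provider block above refers to an azurerm_container_registry.acr resource that is not shown in the answer; a minimal sketch of what it might look like (the resource and registry names here are assumptions):
# Hypothetical ACR definition matching the references above.
resource "azurerm_container_registry" "acr" {
  name                = "examplecontainerregistry" # must be globally unique
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  sku                 = "Basic"
  admin_enabled       = true # exposes admin_username/admin_password used by the docker provider
}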
Then store/use the image in the container group:
resource "azurerm_container_group" "example" {
name = "kaexample-container"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_address_type = "Public"
dns_name_label = "kavyaaci-label"
os_type = "Linux"
container {
name = "hello-world"
image = "mcr.microsoft.com/azuredocs/aci-helloworld:latest"
cpu = "0.5"
memory = "1.5"
ports {
port = 443
protocol = "TCP"
}
}
tags = {
environment = "testing"
}
}
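The example above pulls a public mcr.microsoft.com image, so no credentials are needed. If the container group instead pulls a private image from the ACR sketched earlier, it also needs an image_registry_credential block; a hedged sketch (the image and resource names are assumptions):
resource "azurerm_container_group" "private_example" {
  name                = "example-private-container"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  ip_address_type     = "Public"
  os_type             = "Linux"

  # Credentials so ACI can pull from the private registry.
  image_registry_credential {
    server   = azurerm_container_registry.acr.login_server
    username = azurerm_container_registry.acr.admin_username
    password = azurerm_container_registry.acr.admin_password
  }

  container {
    name   = "app"
    image  = "${azurerm_container_registry.acr.login_server}/app:latest" # hypothetical image name
    cpu    = "0.5"
    memory = "1.5"
  }
}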
providers.tf
terraform {
  required_providers {
    azapi = {
      source  = "azure/azapi"
      version = "=0.1.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.2"
    }
    docker = {
      source  = "kreuzwerker/docker"
      version = ">= 2.16.0"
    }
  }
}
Code executed:
terraform apply
Images in the container registry:
Reference: Use terraform to push docker image to azure container registry - Stack Overflow
When I configure Azure monitoring using the OMS solution for VMs per this answer (Enable Azure Monitor for existing Virtual machines using terraform), I notice that this feature is being deprecated and Azure prefers that you move to the new monitoring solution (not using the Log Analytics agent).
Azure allows me to configure VM monitoring using this GUI, but I would like to do it using Terraform.
Is there a particular setup I have to use in Terraform to achieve this? (I am using a Linux VM, by the way.)
Yes, that is correct. The omsagent has been marked as legacy and Azure now has a new monitoring agent called "Azure Monitor agent". The solution given below is for Linux; please check the official Terraform docs for Windows machines.
We need three things to achieve the equivalent of the UI setup in Terraform (the example code below also installs the agent itself via azurerm_virtual_machine_extension):
azurerm_log_analytics_workspace
azurerm_monitor_data_collection_rule
azurerm_monitor_data_collection_rule_association
Below is the example code:
data "azurerm_virtual_machine" "vm" {
name = var.vm_name
resource_group_name = var.az_resource_group_name
}
resource "azurerm_log_analytics_workspace" "workspace" {
name = "${var.project}-${var.env}-log-analytics"
location = var.az_location
resource_group_name = var.az_resource_group_name
sku = "PerGB2018"
retention_in_days = 30
}
resource "azurerm_virtual_machine_extension" "AzureMonitorLinuxAgent" {
name = "AzureMonitorLinuxAgent"
publisher = "Microsoft.Azure.Monitor"
type = "AzureMonitorLinuxAgent"
type_handler_version = 1.0
auto_upgrade_minor_version = "true"
virtual_machine_id = data.azurerm_virtual_machine.vm.id
}
resource "azurerm_monitor_data_collection_rule" "example" {
name = "example-rules"
resource_group_name = var.az_resource_group_name
location = var.az_location
destinations {
log_analytics {
workspace_resource_id = azurerm_log_analytics_workspace.workspace.id
name = "test-destination-log"
}
azure_monitor_metrics {
name = "test-destination-metrics"
}
}
data_flow {
streams = ["Microsoft-InsightsMetrics"]
destinations = ["test-destination-log"]
}
data_sources {
performance_counter {
streams = ["Microsoft-InsightsMetrics"]
sampling_frequency_in_seconds = 60
counter_specifiers = ["\\VmInsights\\DetailedMetrics"]
name = "VMInsightsPerfCounters"
}
}
}
# associate to a Data Collection Rule
resource "azurerm_monitor_data_collection_rule_association" "example1" {
name = "example1-dcra"
target_resource_id = data.azurerm_virtual_machine.vm.id
data_collection_rule_id = azurerm_monitor_data_collection_rule.example.id
description = "example"
}
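For completeness, the input variables referenced in the snippets above could be declared roughly like this (a sketch; the descriptions are assumptions):
variable "vm_name" {
  type        = string
  description = "Name of the existing VM to monitor"
}

variable "az_resource_group_name" {
  type        = string
  description = "Resource group containing the VM and the monitoring resources"
}

variable "az_location" {
  type        = string
  description = "Azure region, e.g. westeurope"
}

variable "project" {
  type = string
}

variable "env" {
  type = string
}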
Reference:
monitor_data_collection_rule
monitor_data_collection_rule_association
As the very first step of my release process I run the following Terraform code:
resource "azurerm_automation_account" "automation_account" {
for_each = data.terraform_remote_state.pod_bootstrap.outputs.ops_rg
name = "${local.automation_account_prefix}-${each.key}"
location = each.key
resource_group_name = each.value.name
sku_name = "Basic"
tags = {
environment = "development"
}
}
The automation accounts are created as expected and I can see them in the Azure portal.
I also have Terraform code that creates a couple of Windows VMs; each VM creation is accompanied by the following:
resource "azurerm_virtual_machine_extension" "dsc" {
name = "DevOpsDSC"
virtual_machine_id = var.vm_id
publisher = "Microsoft.Powershell"
type = "DSC"
type_handler_version = "2.83"
settings = <<SETTINGS_JSON
{
"configurationArguments": {
"RegistrationUrl": "${var.dsc_server_endpoint}",
"NodeConfigurationName": "${var.dsc_config}",
"ConfigurationMode": "${var.dsc_mode}",
"ConfigurationModeFrequencyMins": 15,
"RefreshFrequencyMins": 30,
"RebootNodeIfNeeded": false,
"ActionAfterReboot": "continueConfiguration",
"AllowModuleOverwrite": true
}
}
SETTINGS_JSON
protected_settings = <<PROTECTED_SETTINGS_JSON
{
"configurationArguments": {
"RegistrationKey": {
"UserName": "PLACEHOLDER_DONOTUSE",
"Password": "${var.dsc_primary_access_key}"
}
}
}
PROTECTED_SETTINGS_JSON
}
The result is as expected: a VM extension is created for each VM, and its status shows that provisioning succeeded.
For the next step I run the following Terraform code:
resource "azurerm_automation_dsc_configuration" "iswebserver" {
for_each = data.terraform_remote_state.pod_bootstrap.outputs.ops_rg
name = "iswebserver"
resource_group_name = each.value.name
automation_account_name = data.terraform_remote_state.ops.outputs.automation_account[each.key].name
location = each.key
content_embedded = "configuration iswebserver {}"
}
resource "azurerm_automation_dsc_nodeconfiguration" "iswebserver" {
for_each = data.terraform_remote_state.pod_bootstrap.outputs.ops_rg
name = "iswebserver.localhost"
resource_group_name = each.value.name
automation_account_name = data.terraform_remote_state.ops.outputs.automation_account[each.key].name
depends_on = [azurerm_automation_dsc_configuration.iswebserver]
content_embedded = file("${path.cwd}/iswebserver.mof")
}
The MOF file content is the following:
/*
#TargetNode='IsWebServer'
#GeneratedBy=P120bd0
#GenerationDate=02/25/2021 17:33:16
#GenerationHost=D-MJ05UA54
*/
instance of MSFT_RoleResource as $MSFT_RoleResource1ref
{
ResourceID = "[WindowsFeature]IIS";
IncludeAllSubFeature = True;
Ensure = "Present";
SourceInfo = "D:\\DSC\\testconfig.ps1::5::9::WindowsFeature";
Name = "Web-Server";
ModuleName = "PsDesiredStateConfiguration";
ModuleVersion = "1.0";
ConfigurationName = "TestConfig";
};
instance of OMI_ConfigurationDocument
{
Version="2.0.0";
MinimumCompatibleVersion = "1.0.0";
CompatibleVersionAdditionalProperties= {"Omi_BaseResource:ConfigurationName"};
Author="P120bd0";
GenerationDate="02/25/2021 17:33:16";
GenerationHost="D-MJ05UA54";
Name="TestConfig";
};
After running the code, the configuration is created as expected, and clicking the configuration entry in the UI grid shows that the node configuration is created as well. My expectation was that for each VM I would see a node configured to run the configuration provided in the MOF file, but the Nodes UI shows no nodes.
So I tried to configure a node manually to connect all the pieces together, and that fails.
So I am totally confused. On the one hand, there is azurerm_virtual_machine_extension, which lets you create the extension and bind it to the Automation account. In addition, there are azurerm_automation_dsc_configuration and azurerm_automation_dsc_nodeconfiguration, which let you create the configuration and node configuration. But the bottom line is that you cannot connect all of those dots to register a node.
Just to confirm that the configuration is valid, I created an additional VM without using azurerm_virtual_machine_extension, and I was able to successfully add this VM to the created node configuration.
The problem was in the DSC configuration parameter passed to azurerm_virtual_machine_extension: the value needs to be the same as the name property of the azurerm_automation_dsc_nodeconfiguration resource.
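In other words, the value interpolated into the extension's "NodeConfigurationName" setting must resolve to the node configuration name shown above ("iswebserver.localhost"). A minimal sketch of that wiring, reusing the variable name from the question's code (the default shown is an assumption):
# var.dsc_config is interpolated into the DSC extension's settings as
# "NodeConfigurationName"; it must equal
# azurerm_automation_dsc_nodeconfiguration.iswebserver.name.
variable "dsc_config" {
  type        = string
  description = "Node configuration the VM should pull from the Automation account"
  default     = "iswebserver.localhost"
}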
I'm brand new to Terraform, and I'm utilizing Terragrunt to help me get things rolling. I have a decent amount of infrastructure to migrate and set up with Terraform, but I'm getting my feet underneath me first. We have multiple VPCs in different regions with a lot of the same security group rules (web, db, etc.) that I want to replicate across each region.
I have a simple example of how I currently have an EC2 module set up to recreate the security group rules, and I was wondering whether there's a better way to organize this code so I don't have to create a new module for the same SG rule in each region, i.e. some smart way to utilize lists for my VPCs, providers, etc.
Since this is just one SG rule across two regions, I'm trying to avoid this growing ugly as we scale up to even more regions and add multiple SG rules.
My state is currently stored in S3, and in this setup I pull the state so I can access the VPC outputs from another module I used to create the VPCs:
terraform {
  backend "s3" {}
}

provider "aws" {
  version = "~> 1.31.0"
  region  = "${var.region}"
  profile = "${var.profile}"
}

provider "aws" {
  version = "~> 1.31.0"
  alias   = "us-west-1"
  region  = "us-west-1"
  profile = "${var.profile}"
}

#################################
# Data sources to get VPC details
#################################
data "terraform_remote_state" "vpc" {
  backend = "s3"

  config {
    bucket  = "${var.vpc_remote_state_bucket}"
    key     = "${var.vpc_remote_state_key}"
    region  = "${var.region}"
    profile = "${var.profile}"
  }
}
#####################
# Security group rule
#####################
module "east1_vpc_web_server_sg" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "2.5.0"

  name        = "web-server"
  description = "Security group for web-servers with HTTP ports open within the VPC"
  vpc_id      = "${data.terraform_remote_state.vpc.us_east_vpc1_id}"

  # Allow VPC public subnets to talk to each other for APIs
  ingress_cidr_blocks = ["${data.terraform_remote_state.vpc.us_east_vpc1_public_subnets_cidr_blocks}"]
  ingress_rules       = ["https-443-tcp", "http-80-tcp"]

  # List of maps
  ingress_with_cidr_blocks = "${var.web_server_ingress_with_cidr_blocks}"

  # Allow egress of all protocols to the outside
  egress_rules = ["all-all"]

  tags = {
    Terraform   = "true"
    Environment = "${var.environment}"
  }
}
module "west1_vpc_web_server_sg" {
source = "terraform-aws-modules/security-group/aws"
version = "2.5.0"
providers {
aws = "aws.us-west-1"
}
name = "web-server"
description = "Security group for web-servers with HTTP ports open within the VPC"
vpc_id = "${data.terraform_remote_state.vpc.us_west_vpc1_id}"
# Allow VPC public subnets to talk to each other for API's
ingress_cidr_blocks = ["${data.terraform_remote_state.vpc.us_west_vpc1_public_subnets_cidr_blocks}"]
ingress_rules = ["https-443-tcp", "http-80-tcp"]
ingress_with_cidr_blocks = "${var.web_server_ingress_with_cidr_blocks}"
# Allow engress all protocols to outside
egress_rules = ["all-all"]
tags = {
Terraform = "true"
Environment = "${var.environment}"
}
}
Your current setup uses the same module twice, differing only in the provider. You can pass multiple providers down to the module (see the documentation). Then, within the module, you can use the same variables you specified once in your main document to create all the instances you need.
However, since each region needs its own provider, you will still have at least some code duplication down the line.
Your code could then look something like this:
module "vpc_web_server_sg" {
source = "terraform-aws-modules/security-group/aws"
version = "2.5.0"
providers {
aws.main = "aws"
aws.secondary = "aws.us-west-1"
}
name = "web-server"
description = "Security group for web-servers with HTTP ports open within the VPC"
vpc_id = "${data.terraform_remote_state.vpc.us_west_vpc1_id}"
# Allow VPC public subnets to talk to each other for API's
ingress_cidr_blocks = ["${data.terraform_remote_state.vpc.us_west_vpc1_public_subnets_cidr_blocks}"]
ingress_rules = ["https-443-tcp", "http-80-tcp"]
ingress_with_cidr_blocks = "${var.web_server_ingress_with_cidr_blocks}"
# Allow engress all protocols to outside
egress_rules = ["all-all"]
tags = {
Terraform = "true"
Environment = "${var.environment}"
}
}
Inside your module you can then use the main and secondary providers to deploy all your required resources, as sketched below.
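For reference, on current Terraform versions a child module declares the provider aliases it expects via configuration_aliases. A minimal sketch of what that could look like inside the module (this uses modern syntax rather than the 0.11-era syntax shown above, and the resource is only illustrative):
# Inside the module: declare the aliased providers the caller must pass in.
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.main, aws.secondary]
    }
  }
}

# Resources in the module then select a provider explicitly.
resource "aws_security_group" "web" {
  provider = aws.secondary
  name     = "web-server"
  vpc_id   = var.vpc_id
}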
I have an example of how to deploy an Azure Function using Terraform. But, unfortunately, it deploys only a zip package. Is there any other way to do it? How can I deploy multiple packages into one Function App? How can I configure a proxy using Terraform?
resource "azurerm_function_app" "azure_function_scenario1_hop2" {
name = "scenario1-hop2-azure-function"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
app_service_plan_id = "${var.app_service_plan_id}"
storage_connection_string = "${var.storage_connection_string}"
app_settings {
APPINSIGHTS_INSTRUMENTATIONKEY = "${var.instrumentation_key}"
HASH = "${base64sha256(file("./../bin/scenario1_hop2_node.zip"))}"
WEBSITE_USE_ZIP = "https://github.com/lmolotii/azure-functions-playgroud/raw/master/scenario1_hop2_node.zip"
}
}
As of version 3.0 of the azurerm provider, you can deploy individual functions using Terraform. You just need the azurerm_function_app_function resource, as documented here: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/function_app_function
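A minimal sketch of that resource based on the provider documentation (the referenced azurerm_linux_function_app.example and the function code/bindings are assumptions, not part of the original answer):
resource "azurerm_function_app_function" "example" {
  name            = "example-function"
  function_app_id = azurerm_linux_function_app.example.id
  language        = "Python"

  # Function source shipped inline with the resource.
  file {
    name    = "run.py"
    content = file("run.py")
  }

  test_data = jsonencode({
    name = "Azure"
  })

  # Bindings that would otherwise live in function.json.
  config_json = jsonencode({
    bindings = [
      {
        authLevel = "function"
        direction = "in"
        methods   = ["get", "post"]
        name      = "req"
        type      = "httpTrigger"
      },
      {
        direction = "out"
        name      = "$return"
        type      = "http"
      }
    ]
  })
}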