Deploy azure function using terraform - azure

I have an example of how to deploy an Azure Function using Terraform, but unfortunately it only deploys a zip package. Is there any other way to do it? How can I deploy multiple packages into one function? How can I configure a proxy using Terraform?
resource "azurerm_function_app" "azure_function_scenario1_hop2" {
name = "scenario1-hop2-azure-function"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
app_service_plan_id = "${var.app_service_plan_id}"
storage_connection_string = "${var.storage_connection_string}"
app_settings {
APPINSIGHTS_INSTRUMENTATIONKEY = "${var.instrumentation_key}"
HASH = "${base64sha256(file("./../bin/scenario1_hop2_node.zip"))}"
WEBSITE_USE_ZIP = "https://github.com/lmolotii/azure-functions-playgroud/raw/master/scenario1_hop2_node.zip"
}
}

As of version 3.0 of the azurerm provider, you can deploy individual Functions using Terraform. You just need the azurerm_function_app_function resource, as documented here: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/function_app_function
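For illustration, here is a minimal sketch of that resource, modelled on the registry example; the resource names, the Python language choice, and the local run.py file are illustrative and assume an existing azurerm_linux_function_app:

resource "azurerm_function_app_function" "example" {
  name            = "example-http-function"
  function_app_id = azurerm_linux_function_app.example.id
  language        = "Python"

  # Ship the function code directly instead of pointing at a zip package.
  file {
    name    = "run.py"
    content = file("${path.module}/run.py")
  }

  # Bindings that would normally live in function.json.
  config_json = jsonencode({
    bindings = [
      {
        authLevel = "function"
        direction = "in"
        methods   = ["get", "post"]
        name      = "req"
        type      = "httpTrigger"
      },
      {
        direction = "out"
        name      = "$return"
        type      = "http"
      }
    ]
  })
}

Several such resources can point at the same function_app_id, which is one way to get multiple functions into a single function app.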

Related

AzureRm 3.33.0: create app service conditionally on the basis of an input var?

I have developed a common module to create an Azure app service plan and app service for projects. In azurerm version 2, the kind variable in the app service plan resource was set to linux or windows.
But with the azurerm 3 upgrade there is one app service plan resource and two app service resources. Is there any way to decide which resource gets created depending on the value of an input variable, apart from using count like below? Script used:
resource "azurerm_service_plan" "app_service_plan" {
name = local.name
location = var.location
resource_group_name = var.resource_group_name
tags = var.tags
os_type = var.os_type}
For Windows:
resource "azurerm_windows_web_app" "app_service" {
name = local.name
location = var.location
resource_group_name = var.resource_group_name
service_plan_id = var.app_service_plan_id
count = var.kind == "Windows" ?1:0}
For Linux:
resource "azurerm_linux_web_app" "app_service" {
name = local.name
location = var.location
resource_group_name = var.resource_group_name
service_plan_id = var.app_service_plan_id
count = var.kind == "Linux" ?1:0}
Is there any way to decide which resource gets created depending on the value of an input variable, apart from using count?
Yes, there is another way to create a web app (resource) depending on the value of the input variable (var.kind).
Using a depends_on block:
depends_on = [var.kind == "windows" ? azurerm_windows_web_app : azurerm_linux_web_app]
Or you can also use Terraform modules.
Reference: a sample Terraform web app deployment module ("app service"), if required, and an SO answer I worked on in Bicep for another approach.
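If you take the module route, one pattern (a sketch only, not from the answer above; it assumes Terraform >= 0.15 for one()) is to keep count on both resources inside the module and expose a single output, so callers never need to know which resource was actually created:

output "app_service_id" {
  description = "ID of whichever web app was created"
  value       = var.kind == "Windows" ? one(azurerm_windows_web_app.app_service[*].id) : one(azurerm_linux_web_app.app_service[*].id)
}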

Unable to create terraform backend - Variables not allowed

I'm trying to create a Terraform backend in my TF script. The problem is that I'm getting errors that variables are not allowed.
Here is my code:
# Configure the Azure provider
provider "azurerm" {
  version = "~> 2.0"
}

# Create an Azure resource group
resource "azurerm_resource_group" "example" {
  name     = "RG-TERRAFORM-BACKEND"
  location = "$var.location"
}

# Create an Azure storage account
resource "azurerm_storage_account" "example" {
  name                     = "$local.backendstoragename"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  tags                     = "$var.tags"
}

# Create an Azure storage container
resource "azurerm_storage_container" "example" {
  name                  = "example"
  resource_group_name   = azurerm_resource_group.example.name
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"
}

# Create a Terraform backend configuration
resource "azurerm_terraform_backend_configuration" "example" {
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_name = azurerm_storage_account.example.name
  container_name       = azurerm_storage_container.example.name
  key                  = "terraform.tfstate"
}

# Use the backend configuration to configure the Terraform backend
terraform {
  backend "azurerm" {
    resource_group_name  = azurerm_terraform_backend_configuration.example.resource_group_name
    storage_account_name = azurerm_terraform_backend_configuration.example.storage_account_name
    container_name       = azurerm_terraform_backend_configuration.example.container_name
    key                  = azurerm_terraform_backend_configuration.example.key
  }
}
What am I doing wrong? All of a sudden Terraform init is giving me the following errors:
Error: Variables not allowed
│
│ on main.tf line 65, in terraform:
│ 65: key = azurerm_terraform_backend_configuration.example.key
│
│ Variables may not be used here.
╵
I get the above error for ALL lines.
What am I doing wrong?
I tried to refactor the
azurerm_terraform_backend_configuration.example.container_name
as an interpolation - i.e. "$.." - but that didn't get accepted.
Has anything changed in Terraform? This wasn't the case a few years ago.
I have not found this resource azurerm_terraform_backend_configuration in any of the terraform-provider-azurerm documentation.
Check this URL for search results.
https://github.com/hashicorp/terraform-provider-azurerm/search?q=azurerm_terraform_backend_configuration
I am not aware of the resource azurerm_terraform_backend_configuration either, but as of now Terraform does not support variables in the backend configuration.
Official documentation on the azurerm backend.
What you are trying here is a chicken-and-egg problem (even if I ignore azurerm_terraform_backend_configuration): initializing the Terraform code needs the remote backend, but creating the backend resources requires not just terraform init but also terraform apply, which cannot run before initialization.
The following are two possible solutions:
1: Create the resources required by the backend manually in the portal and then use them in your backend config (hard-coded values instead of any data source or variables).
2: Create the resources with the local backend and then migrate the local state to the remote backend.
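A further option, not part of the original answer: Terraform also supports partial backend configuration, where the backend block is left empty in the .tf files and the values are supplied at init time. A minimal sketch (the values shown are just the ones from this question, with a placeholder for the storage account name):

terraform {
  backend "azurerm" {}
}

# Then initialize with, for example:
#   terraform init \
#     -backend-config="resource_group_name=RG-TERRAFORM-BACKEND" \
#     -backend-config="storage_account_name=<your-storage-account-name>" \
#     -backend-config="container_name=example" \
#     -backend-config="key=terraform.tfstate"

The storage account and container still have to exist before init, so this combines naturally with option 1.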
Step 2.1: Create backend resources with local backend initially.
Provider Config
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.37.0"
    }
  }

  required_version = ">= 1.1.0"
}

provider "azurerm" {
  features {}
}
Backend resources
locals {
  backendstoragename = "stastackoverflow001"
}

# variable definitions
variable "tags" {
  type        = map(string)
  description = "(optional) Tags attached to resources"
  default = {
    used_case = "stastackoverflow"
  }
}
# Create an Azure resource group
resource "azurerm_resource_group" "stackoverflow" {
  name     = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
  location = "West Europe"
}

# Create an Azure storage account
resource "azurerm_storage_account" "stackoverflow" {
  name                     = local.backendstoragename ## or "${local.backendstoragename}", but local.backendstoragename is better
  location                 = azurerm_resource_group.stackoverflow.location
  resource_group_name      = azurerm_resource_group.stackoverflow.name
  account_tier             = "Standard"
  account_replication_type = "LRS"
  tags                     = var.tags ## or "${var.tags}", but var.tags is better
}

# Create an Azure storage container
resource "azurerm_storage_container" "stackoverflow" {
  name                  = "stackoverflow"
  storage_account_name  = azurerm_storage_account.stackoverflow.name
  container_access_type = "private"
}
Step 2.2: Apply the code with local backend.
terraform init
terraform plan # to view the plan
terraform apply -auto-approve # drop `-auto-approve` if you do not want auto-approval on apply
After applying you will get the message:
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Step 2.3: Update the backend configuration from local to remote.
Provider Config
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.37.0"
    }
  }

  required_version = ">= 1.1.0"

  ## Add remote backend config.
  backend "azurerm" {
    resource_group_name  = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
    storage_account_name = "stastackoverflow001"
    container_name       = "stackoverflow"
    key                  = "terraformstate"
  }
}
Step 2.4: Re-initialize Terraform.
After adding your remote backend, run the `terraform init -reconfigure` command and then type `yes` to migrate your local backend to the remote backend.
➜ variables_in_azurerm_backend git:(main) ✗ terraform init -reconfigure
Initializing the backend...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "azurerm" backend. No existing state was found in the newly
configured "azurerm" backend. Do you want to copy this state to the new "azurerm"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Using previously-installed hashicorp/azurerm v3.37.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Now Terraform should use the configured remote backend and will also be able to manage the resources created in steps 2.1 and 2.2. You can verify this by running the terraform plan command; it should give a "No changes" message:
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are
needed.
One more side note: version constraints inside provider configuration blocks are deprecated and will be removed in a future version of Terraform.
Special considerations: use a different container key and directory for your other infrastructure Terraform configurations to avoid accidental destruction of the storage account used for the backend config.
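For example, a second Terraform configuration reusing the same storage account and container could keep its state under its own key (the key name below is purely illustrative):

backend "azurerm" {
  resource_group_name  = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
  storage_account_name = "stastackoverflow001"
  container_name       = "stackoverflow"
  key                  = "other-infrastructure/terraform.tfstate" # illustrative key, separate from "terraformstate"
}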

How to deploy a Windows VM with Terraform Azure CAF?

I want to deploy a Windows VM with the Azure Cloud Adoption Framework (CAF) using Terraform. In the configuration.tfvars example, all the configuration is done, but I cannot find the correct Terraform code to deploy this tfvars configuration.
The Windows VM module is here.
So far, I have written the code below:
module "caf_virtual_machine" {
source = "aztfmod/caf/azurerm//modules/compute/virtual_machine"
version = "5.0.0"
# belows are the 7 required variables
base_tags = var.tags
client_config =
global_settings = var.global_settings
location = var.location
resource_group_name = var.resource_group_name
settings =
vnets = var.vnets
}
The vnets, global_settings and resource_group_name variables already exist in configuration.tfvars. I have added the tags and location variables to configuration.tfvars.
But what should I enter for the settings and client_config variables?
The virtual machine module is a private module. You should use it by calling the base CAF module.
The Readme on the Terraform registry explains how to leverage the core CAF module: https://registry.terraform.io/modules/aztfmod/caf/azurerm/latest/submodules/virtual_machine
Source code of an example:
https://github.com/aztfmod/terraform-azurerm-caf/tree/master/examples/compute/virtual_machine/211-vm-bastion-winrm-agents/registry
There is also a library of example configuration files showing how to deploy virtual machines:
https://github.com/aztfmod/terraform-azurerm-caf/tree/master/examples/compute/virtual_machine
module "caf" {
source = "aztfmod/caf/azurerm"
version = "5.0.0"
global_settings = var.global_settings
tags = var.tags
resource_groups = var.resource_groups
storage_accounts = var.storage_accounts
keyvaults = var.keyvaults
managed_identities = var.managed_identities
role_mapping = var.role_mapping
diagnostics = {
# Get the diagnostics settings of services to create
diagnostic_log_analytics = var.diagnostic_log_analytics
diagnostic_storage_accounts = var.diagnostic_storage_accounts
}
compute = {
virtual_machines = var.virtual_machines
}
networking = {
vnets = var.vnets
network_security_group_definition = var.network_security_group_definition
public_ip_addresses = var.public_ip_addresses
}
security = {
dynamic_keyvault_secrets = var.dynamic_keyvault_secrets
}
}
Note - it is recommended to use the VS Code devcontainer provided in the source repository to execute the Terraform deployment. The devcontainer includes the tooling required to deploy Azure solutions.

Create a compute instance in an Azure Machine Learning workspace with Terraform

I am using the Terraform azurerm_machine_learning_workspace resource to create a Machine Learning workspace in Azure. The relevant code for that is as follows:
resource "azurerm_machine_learning_workspace" "example" {
name = "${var.ML_workspace_name}"
location = data.azurerm_resource_group.resource_group.location
resource_group_name = data.azurerm_resource_group.resource_group.name
application_insights_id = azurerm_application_insights.azurem_application_insight.id
key_vault_id = azurerm_key_vault.key_vault.id
storage_account_id = azurerm_storage_account.storage_account.id
identity {
type = "SystemAssigned"
}
}
This is working fine.
Now I want to create a compute instance within this workspace. Terraform does not support this, or I am not able to find it in the docs.
Are there other ways to automate its creation?
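For what it is worth, more recent versions of the azurerm provider do expose an azurerm_machine_learning_compute_instance resource; a minimal sketch (the instance name and VM size are illustrative, and older provider versions may additionally require a location argument):

resource "azurerm_machine_learning_compute_instance" "example" {
  name                          = "example-ci"
  machine_learning_workspace_id = azurerm_machine_learning_workspace.example.id
  virtual_machine_size          = "STANDARD_DS2_V2" # illustrative size
}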

How Do I Set The Node Version In An Azure Web App?

I have created an Azure Web App using Terraform, but it has the wrong version of NodeJS in it.
resource "azurerm_app_service_plan" "app-plan" {
name = "${var.prefix}-app-plan"
resource_group_name = var.resource_group_name
location = var.resource_group_location
sku {
tier = "Free"
size = "F1"
}
}
#azurerm_app_service doesn't support creating Node.JS 8.10 apps
#https://github.com/terraform-providers/terraform-provider-azurerm/issues/4144
resource "azurerm_app_service" "app-service" {
name = "${var.prefix}-app"
resource_group_name = var.resource_group_name
location = var.resource_group_location
app_service_plan_id = azurerm_app_service_plan.app-plan.id
}
I have tried updating the configuration using the REST API:
{
  "properties": {
    "nodeVersion": "8.10"
  }
}
and also updating the application settings using the REST API:
{
  "properties": {
    "WEBSITE_NODE_DEFAULT_VERSION": "8.10"
  }
}
However, when I run the Console it still says node --version v0.10.40
When I run env it looks like the PATH variable is incorrect.
Node 8.10 does exist on the machine at D:\Program Files (x86)\nodejs\8.10.0
How can I update the path from the REST API?
Are there any alternatives?
My preferences are Terraform > az cli > REST API.
Note:
Bear in mind that when I create the web app in the portal, selecting Node 8.10 forces me to choose Windows as the O/S.
Under site_config, linux_fx_version should be set to "NODE|8.10"
I have gotten it to work with node 10.14, using:
site_config {
  linux_fx_version = "NODE|10.14"
}
You can also see different examples of azure web apps over at:
https://github.com/terraform-providers/terraform-provider-azurerm/tree/master/examples/app-service
In the portal it specifies Node 8.10 as a Runtime stack.
The az cli specifies 8.10 as a runtime:
az webapp list-runtimes|grep "8.10"
"node|8.10",
However, as you can see in the question, the version installed is 8.10.0.
If we set the full version in the application settings with Terraform, this (unintuitively) sets the correct node version:
resource "azurerm_app_service" "app-service" {
name = "${var.prefix}-app"
resource_group_name = var.resource_group_name
location = var.resource_group_location
app_service_plan_id = azurerm_app_service_plan.app-plan.id
app_settings = {
#The portal and az cli list "8.10" as the supported version.
#"8.10" doesn't work here!
#"8.10.0" is the version installed in D:\Program Files (x86)\nodejs
"WEBSITE_NODE_DEFAULT_VERSION" = "8.10.0"
}
}
