I am trying to execute Terraform scripts through Azure DevOps, but I am not able to apply and validate through different tasks: although terraform plan succeeds, terraform apply fails with:
##[error]TypeError: Cannot read property 'includes' of null
Here are the Terraform tasks I am using. I have tried two different tasks:
1.
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-release-task.TerraformTaskV2@2
  displayName: 'Terraform : apply -auto-approve'
  inputs:
    command: apply
    workingDirectory: '$(System.DefaultWorkingDirectory)/Terraform'
    commandOptions: '-auto-approve'
    environmentServiceNameAzureRM: 'ps-vs-sc'
    backendAzureRmResourceGroupName: '$(rgname)'
    backendAzureRmStorageAccountName: $(strname)
    backendAzureRmContainerName: $(tfContainer)
    backendAzureRmKey: '$(storagekey)'
2.
- task: TerraformTaskV2@2
  inputs:
    provider: 'azurerm'
    command: 'apply'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Terraform'
    commandOptions: '--auto-approve'
    environmentServiceNameAzureRM: 'ps-vs1-sc'
Here is my terraform file
provider "azurerm" {
features {}
}
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "2.74.0"
}
}
}
data "azurerm_api_management" "example" {
name = var.apimName
resource_group_name = var.rgName
}
resource "azurerm_api_management_api" "example" {
name = var.apName
resource_group_name = var.rgName
api_management_name = var.apimname
revision = "1"
display_name = "Example API1"
path = "example1"
protocols = ["https"]
service_url = "http://123.0.0.0:8000"
subscription_required = true
import {
content_format = "openapi+json"
content_value = #{storageaccountlink}#
}
The Terraform file looks fine; there is no issue with it. Check whether you are using the Azure Service Principal authentication method.
A Service Principal is considered good practice for DevOps within your CI/CD pipeline. It is used as an identity to authenticate you against your Azure subscription, allowing you to deploy the relevant Terraform code.
You can follow this document for a detailed demo of deploying Terraform using Azure DevOps.
If you have already followed the above steps and are still facing the same issue, the problem is likely the 'Configuration directory' setting in the Terraform 'Validate and Apply' release pipeline step. Changing it to the path containing the build artifacts should fix the issue. Also check that the workingDirectory you have provided is correct.
A similar, but not identical, problem was also raised here; check whether its solution helps.
I have added the entire pipeline definition here just for reference - https://gist.github.com/PrakashRajanSakthivel/e6d8e03044a51d74803499aca75a258c
I have a question regarding Azure SQL Database.
Is there any way to create a table in an Azure SQL database through a script?
Suppose I provision the database through Terraform and want to create a table as well while it is being provisioned, so the process is fully automated.
Or is there any other option to create a table through GitHub Actions? Our code is on GitHub and we are using GitHub Actions to provision all our Azure resources.
resource "azurerm_mssql_server" "data_sql_server" {
name = "data-sql-server-${var.env}"
resource_group_name = var.resource_group_name
location = var.location
version = "12.0"
administrator_login = var.mssql_db_username
administrator_login_password = var.mssql_db_password
minimum_tls_version = "1.2"
public_network_access_enabled = true
tags = var.common_tags
}
resource "azurerm_mssql_database" "data_sql_db" {
name = "data-sql-db-${var.env}"
server_id = azurerm_mssql_server.data_sql_server.id
collation = "SQL_Latin1_General_CP1_CI_AS"
license_type = "LicenseIncluded"
max_size_gb = 250
sku_name = "S1"
tags = var.common_tags
}
So here is the Terraform code. What I want is that whenever it provisions the managed SQL server and managed database, it also creates a table in the DB, so I don't have to create that table manually.
The answer: you need to create a file .github/workflows/sql-deploy.yml and add this code:
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: azure/login@v1 # Azure login required to add a temporary firewall rule
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - uses: azure/sql-action@v2
        with:
          connection-string: ${{ secrets.AZURE_SQL_CONNECTION_STRING }}
          path: './script.sql'
Now you need to create two files in your GitHub repository.
The first file is Database.sqlproj:
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build">
  <Sdk Name="Microsoft.Build.Sql" Version="0.1.3-preview" />
  <PropertyGroup>
    <Name>reactions</Name>
    <DSP>Microsoft.Data.Tools.Schema.Sql.SqlAzureV12DatabaseSchemaProvider</DSP>
    <ModelCollation>1033, CI</ModelCollation>
  </PropertyGroup>
</Project>
The second file is script.sql:
CREATE TABLE [dbo].[Product](
  [ProductID] [int] IDENTITY(1,1) PRIMARY KEY NOT NULL,
  [Name] [nvarchar](100) NOT NULL,
  [ProductNumber] [nvarchar](25) NOT NULL,
  [Color] [nvarchar](15) NULL,
  [StandardCost] [money] NOT NULL,
  [ListPrice] [money] NOT NULL,
  [Size] [nvarchar](5) NULL,
  [Weight] [decimal](8, 2) NULL,
  [ProductCategoryID] [int] NULL,
  [ProductModelID] [int] NULL,
  [ModifiedDate] [datetime] NOT NULL
)
Here is how to create the connection string and save it in GitHub secrets.
Place the connection string from the Azure Portal in GitHub secrets as AZURE_SQL_CONNECTION_STRING. The connection string format is:
Server=<server.database.windows.net>;User ID=<user>;Password=<password>;Initial Catalog=<database>.
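If you would rather keep the table creation inside Terraform itself rather than in a separate workflow, one common workaround is a null_resource with a local-exec provisioner that runs the script once the database exists. This is only a sketch under assumptions: sqlcmd must be installed wherever terraform apply runs, that machine's IP must be allowed through the server firewall, and it reuses the resource and variable names from the question's code.

resource "null_resource" "create_table" {
  # Re-run the script whenever its contents change
  triggers = {
    script_hash = filemd5("${path.module}/script.sql")
  }

  # Assumes sqlcmd is available where `terraform apply` runs and the
  # server firewall allows this client's IP
  provisioner "local-exec" {
    command = "sqlcmd -S ${azurerm_mssql_server.data_sql_server.fully_qualified_domain_name} -d ${azurerm_mssql_database.data_sql_db.name} -U ${var.mssql_db_username} -P ${var.mssql_db_password} -i ${path.module}/script.sql"
  }

  depends_on = [azurerm_mssql_database.data_sql_db]
}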
I'm trying to configure a Terraform backend in my TF script. The problem is that I'm getting errors saying that variables are not allowed.
Here is my code:
# Configure the Azure provider
provider "azurerm" {
  version = "~> 2.0"
}

# Create an Azure resource group
resource "azurerm_resource_group" "example" {
  name     = "RG-TERRAFORM-BACKEND"
  location = "$var.location"
}

# Create an Azure storage account
resource "azurerm_storage_account" "example" {
  name                     = "$local.backendstoragename"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  tags                     = "$var.tags"
}

# Create an Azure storage container
resource "azurerm_storage_container" "example" {
  name                  = "example"
  resource_group_name   = azurerm_resource_group.example.name
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"
}

# Create a Terraform backend configuration
resource "azurerm_terraform_backend_configuration" "example" {
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_name = azurerm_storage_account.example.name
  container_name       = azurerm_storage_container.example.name
  key                  = "terraform.tfstate"
}

# Use the backend configuration to configure the Terraform backend
terraform {
  backend "azurerm" {
    resource_group_name  = azurerm_terraform_backend_configuration.example.resource_group_name
    storage_account_name = azurerm_terraform_backend_configuration.example.storage_account_name
    container_name       = azurerm_terraform_backend_configuration.example.container_name
    key                  = azurerm_terraform_backend_configuration.example.key
  }
}
What am I doing wrong? All of a sudden Terraform init is giving me the following errors:
Error: Variables not allowed
│
│ on main.tf line 65, in terraform:
│ 65: key = azurerm_terraform_backend_configuration.example.key
│
│ Variables may not be used here.
╵
I get the above error for ALL lines.
What am I doing wrong?
I tried to refactor azurerm_terraform_backend_configuration.example.container_name as an interpolation - i.e. "$.." - but that didn't get accepted.
Has anything changed in Terraform? This wasn't the case a few years ago.
I have not found this resource azurerm_terraform_backend_configuration in any of the terraform-provider-azurerm documentation.
Check this URL for search results.
https://github.com/hashicorp/terraform-provider-azurerm/search?q=azurerm_terraform_backend_configuration
I am not even aware of a resource called azurerm_terraform_backend_configuration, but as of now, terraform-provider-azurerm does not support variables in the backend configuration.
Official documentation on Azurerm Backend
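The usual way around this limitation is partial backend configuration: leave the backend block empty and pass the values as -backend-config options at init time (this still requires the storage resources to already exist). A minimal sketch:

terraform {
  backend "azurerm" {}
}

# Values are then supplied at init time instead of through variables:
#   terraform init \
#     -backend-config="resource_group_name=RG-TERRAFORM-BACKEND" \
#     -backend-config="storage_account_name=<account-name>" \
#     -backend-config="container_name=example" \
#     -backend-config="key=terraform.tfstate"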
What you are trying here also creates a chicken-and-egg problem (if I ignore "azurerm_terraform_backend_configuration"): initializing the Terraform code needs the remote backend, but the remote backend resources require not just terraform init but also terraform apply, which is not possible before initialization.
The following are two possible solutions.
1: Create the resources required by the backend manually in the portal and then use them in your backend config (literal values instead of any data source or variables).
2: Create the resources with the local backend and then migrate the local backend config to the remote backend.
Step 2.1: Create backend resources with local backend initially.
Provider Config
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.37.0"
    }
  }
  required_version = ">= 1.1.0"
}

provider "azurerm" {
  features {}
}
Backend resources
locals {
  backendstoragename = "stastackoverflow001"
}

# variable definitions
variable "tags" {
  type        = map(string)
  description = "(optional) Tags attached to resources"
  default = {
    used_case = "stastackoverflow"
  }
}

# Create an Azure resource group
resource "azurerm_resource_group" "stackoverflow" {
  name     = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
  location = "West Europe"
}

# Create an Azure storage account
resource "azurerm_storage_account" "stackoverflow" {
  name                     = local.backendstoragename ## or "${local.backendstoragename}" but better is local.backendstoragename
  location                 = azurerm_resource_group.stackoverflow.location
  resource_group_name      = azurerm_resource_group.stackoverflow.name
  account_tier             = "Standard"
  account_replication_type = "LRS"
  tags                     = var.tags ## or "${var.tags}" but better is var.tags
}

# Create an Azure storage container
resource "azurerm_storage_container" "stackoverflow" {
  name                  = "stackoverflow"
  storage_account_name  = azurerm_storage_account.stackoverflow.name
  container_access_type = "private"
}
Step 2.2: Apply the code with local backend.
terraform init
terraform plan                # to view the plan
terraform apply -auto-approve # omit `-auto-approve` if you do not want auto-approval on apply
After applying you will get the message:
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Step 2.3: Update the backend configuration from local to remote.
Provider Config
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.37.0"
    }
  }
  required_version = ">= 1.1.0"

  ## Add remote backend config.
  backend "azurerm" {
    resource_group_name  = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
    storage_account_name = "stastackoverflow001"
    container_name       = "stackoverflow"
    key                  = "terraformstate"
  }
}
Re-initialize Terraform.
After adding your remote backend, run `terraform init -reconfigure` and then type `yes` to migrate your local backend to the remote backend.
➜ variables_in_azurerm_backend git:(main) ✗ terraform init -reconfigure
Initializing the backend...
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "azurerm" backend. No existing state was found in the newly
configured "azurerm" backend. Do you want to copy this state to the new "azurerm"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value: yes
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Using previously-installed hashicorp/azurerm v3.37.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Now Terraform should use the configured remote backend and will also be able to manage the resources created in steps 2.1 and 2.2. You can verify this by running the terraform plan command; it should report no changes:
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are
needed.
One more side note: version constraints inside provider configuration blocks (as in the question's provider "azurerm" block) are deprecated and will be removed in a future version of Terraform.
Special considerations: use a different container key and directory for your other infrastructure Terraform configurations, to avoid accidental destruction of the storage account used for the backend config.
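For example, each application stack can point at its own state key in the same container; this sketch only varies the key relative to the backend block above:

terraform {
  backend "azurerm" {
    resource_group_name  = "RG-TERRAFORM-BACKEND-STACKOVERFLOW"
    storage_account_name = "stastackoverflow001"
    container_name       = "stackoverflow"
    key                  = "app-infra.tfstate" # a separate key per stack, never the backend stack's own key
  }
}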
I'm attempting to deploy an AKS cluster, plus a role assignment for its system-assigned managed identity, via Terraform, but I'm getting a 403 response:
azurerm_role_assignment.acrpull_role: Creating...
╷
│ Error: authorization.RoleAssignmentsClient#Create: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client '626eac40-c9dd-44cc-a528-3c3d3e069e85' with object id '626eac40-c9dd-44cc-a528-3c3d3e069e85' does not have authorization to perform action 'Microsoft.Authorization/roleAssignments/write' over scope '/subscriptions/7b73e02c-dbff-4eb7-9d73-e73a2a17e818/resourceGroups/myaks-rg/providers/Microsoft.ContainerRegistry/registries/aksmattcloudgurutest/providers/Microsoft.Authorization/roleAssignments/c144ad6d-946f-1898-635e-0d0d27ca2f1c' or the scope is invalid. If access was recently granted, please refresh your credentials."
│
│ with azurerm_role_assignment.acrpull_role,
│ on main.tf line 53, in resource "azurerm_role_assignment" "acrpull_role":
│ 53: resource "azurerm_role_assignment" "acrpull_role" {
│
╵
This only occurs in an Azure DevOps pipeline. My pipeline looks like the following...
trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- task: TerraformInstaller@0
  inputs:
    terraformVersion: '1.0.7'
- task: TerraformCLI@0
  inputs:
    command: 'init'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Shared/Pipeline/Cluster'
    backendType: 'azurerm'
    backendServiceArm: 'Matt Local Service Connection'
    ensureBackend: true
    backendAzureRmResourceGroupName: 'tfstate'
    backendAzureRmResourceGroupLocation: 'UK South'
    backendAzureRmStorageAccountName: 'tfstateq7nqv'
    backendAzureRmContainerName: 'tfstate'
    backendAzureRmKey: 'terraform.tfstate'
    allowTelemetryCollection: true
- task: TerraformCLI@0
  inputs:
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Shared/Pipeline/Cluster'
    environmentServiceName: 'Matt Local Service Connection'
    allowTelemetryCollection: true
- task: TerraformCLI@0
  inputs:
    command: 'validate'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Shared/Pipeline/Cluster'
    allowTelemetryCollection: true
- task: TerraformCLI@0
  inputs:
    command: 'apply'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Shared/Pipeline/Cluster'
    environmentServiceName: 'Matt Local Service Connection'
    allowTelemetryCollection: false
I'm using the terraform tasks from here - https://marketplace.visualstudio.com/items?itemName=charleszipp.azure-pipelines-tasks-terraform
This is my terraform file
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.46.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "TerraformCluster" {
  name     = "terraform-cluster"
  location = "UK South"
}

resource "azurerm_kubernetes_cluster" "TerraformClusterAKS" {
  name                = "terraform-cluster-aks1"
  location            = azurerm_resource_group.TerraformCluster.location
  resource_group_name = azurerm_resource_group.TerraformCluster.name
  dns_prefix          = "terraform-cluster-aks1"

  network_profile {
    network_plugin = "azure"
  }

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Production"
  }
}

data "azurerm_container_registry" "this" {
  depends_on = [
    azurerm_kubernetes_cluster.TerraformClusterAKS
  ]
  provider            = azurerm
  name                = "aksmattcloudgurutest"
  resource_group_name = "myaks-rg"
}

resource "azurerm_role_assignment" "acrpull_role" {
  scope                = data.azurerm_container_registry.this.id
  role_definition_name = "AcrPull"
  principal_id         = azurerm_kubernetes_cluster.TerraformClusterAKS.identity[0].principal_id
}
Where am I going wrong here?
The Service Principal in AAD associated with your ADO Service Connection ('Matt Local Service Connection') will need to be assigned the Owner role at the scope of the resource, or above (depending on where else you will be assigning permissions). You can read details about the various roles here; the two most commonly used roles are Owner and Contributor, the key difference being that Owner allows managing role assignments.
As part of this piece of work, you should also familiarize yourself with the principle of least privilege (if you do not already know about it). Here it would apply as follows: if the Service Principal only needs Owner at the resource level, don't assign it Owner at the resource group or subscription level just because that is more convenient. You can always widen the scope later, but it is much harder to undo the damage (assuming a malicious or inexperienced actor) of an overly permissive role assignment after it has been exploited.
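For illustration, the grant itself could even be expressed in Terraform, as long as it is applied by an identity that is already allowed to manage role assignments (so not by the failing service principal). In this sketch, var.ado_sp_object_id is a hypothetical variable holding the object id of the service principal behind 'Matt Local Service Connection', and the scope reuses the registry data source from the question:

resource "azurerm_role_assignment" "pipeline_sp_owner" {
  # Scope the grant to the registry only, per least privilege
  scope                = data.azurerm_container_registry.this.id
  role_definition_name = "Owner"
  principal_id         = var.ado_sp_object_id # hypothetical: the ADO service connection's SP object id
}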
I tried everything to get this existing storage service to re-connect to the Azure DevOps pipeline to enable Terraform deployments.
Attempted and did not work: breaking the lease on the tfstate, removing the tfstate, updating the lease on the tfstate, inline PowerShell and Bash commands in ADO to purge Terraform, re-installing the Terraform plugin, etc.
What worked:
What ended up working was creating a new storage account with a new storage container and a new SAS token.
This overcame the 403 forbidden error on access to the blob containing the tfstate in ADLS from Azure DevOps for Terraform deployments. It does not explain the how or the why; access controls / IAM / access policies did not change. A tear-down and recreation of the storage containing the tfstate, with exactly the same settings but under a different storage account name, worked.
I have a Terraform script which creates an Azure DevOps project, a build definition, and a service endpoint for GitHub.
resource "azuredevops_serviceendpoint_github" "github_serviceendpoint" {
project_id = azuredevops_project.project.id
service_endpoint_name = "GitHub Service Connection"
github_service_endpoint_pat = var.GITHUB_TOKEN
}
resource "azuredevops_build_definition" "nightly_build" {
project_id = azuredevops_project.project.id
agent_pool_name = "Hosted Ubuntu 1604"
name = "Nightly Build"
path = "\\"
repository {
repo_type = "GitHub"
repo_name = "iojas/django_ci_cd"
branch_name = "master"
yml_path = "azure-pipelines.yml"
service_connection_id = azuredevops_serviceendpoint_github.github_serviceendpoint.id
}
}
All the resources show up in the Azure DevOps console, but pushes to the master branch do not trigger the build.
I have the following trigger set up in azure-pipelines.yml:
trigger:
- master
Now, if I create the same resources using the Azure console, a build gets triggered on every push to master.
I am not sure what I am missing here.
Within API Management, I created an API that enables calling a serverless function app. Now I would like to deploy this functionality automatically. Here are the possibilities I found on the internet:
Create and configure the API Management instance through the portal (this is not what I call an automatic deployment)
Use PowerShell commands (unfortunately I am working with Linux)
ARM (Azure Resource Manager): this is not easy, and I did not find how to create an API with an Azure function app
Terraform: same as ARM, it is not clear to me how to create an API with an Azure function app
If someone has experience, links or ideas, I would be very thankful.
Regards,
We are currently using Terraform for all our Azure infrastructure, including API Management, and I would strongly recommend it.
It creates and updates everything we want, including API policies, and has a relatively small learning curve.
You can start learning here:
https://learn.hashicorp.com/terraform?track=azure#azure
The docs for APIM are here:
https://www.terraform.io/docs/providers/azurerm/r/api_management.html
Once the initial learning curve is done the rest is easy and the benefits are many.
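For a rough idea of what this looks like in Terraform, here is a minimal sketch; the resource names, the Consumption SKU, and the function app URL are placeholders, and function key handling is omitted:

resource "azurerm_api_management" "example" {
  name                = "example-apim"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  publisher_name      = "Example"
  publisher_email     = "admin@example.com"
  sku_name            = "Consumption_0"
}

# Expose a function app by pointing the API's service_url at its host
resource "azurerm_api_management_api" "fn_api" {
  name                = "function-api"
  resource_group_name = azurerm_resource_group.example.name
  api_management_name = azurerm_api_management.example.name
  revision            = "1"
  display_name        = "Function API"
  path                = "fn"
  protocols           = ["https"]
  service_url         = "https://<function-app-name>.azurewebsites.net/api"
}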
Azure PowerShell is 100% cross-platform now, so that's an option. Here are some samples: https://learn.microsoft.com/en-us/azure/api-management/powershell-samples
You can also use ARM templates to spin it up; configuring it that way is a lot harder. You can map any of these calls to the ARM template.
Terraform - I think it's still in the works: https://github.com/terraform-providers/terraform-provider-azurerm/issues/1177. But I wouldn't go that way.
ARM is the way to go.
You can combine it with:
Azure resource manager API for deployment
API Management API for things that ARM doesn't support (yet)
Have a look at the Azure API Management DevOps Resource Kit:
https://github.com/Azure/azure-api-management-devops-resource-kit
I believe the most convenient way to automate the deployment of Azure APIM is dotnet-apim. It's a cross-platform solution that you can easily use on your dev machine or in a CI/CD pipeline.
Make sure you have .NET Core installed.
Install the dotnet-apim tool.
In a YAML file, define the list of APIVersionSets, APIs, Products, Backends, Tags, etc. This YAML file defines what you want to deploy to APIM; you can keep it in source control to track the history of changes. The following YAML file defines two sets, APIs, and products, along with their policies:
version: 0.0.1 # Required
apimServiceName: $(apimServiceName) # Required, must match the name of an APIM service deployed in the specified resource group

apiVersionSets:
  - name: Set1
    displayName: API Set 1
    description: Contains Set 1 APIs.
    versioningScheme: Segment
  - name: Set2
    displayName: API Set 2
    description: Contains Set 2 APIs.
    versioningScheme: Segment

apis:
  - name: API1
    displayName: API v1
    openApiSpec: $(apimBasePath)\Apis\OpenApi.json # Required, can be url or local file
    policy: $(apimBasePath)\Apis\ApiPolicy.xml
    path: api/sample1
    apiVersion: v1
    apiVersionSetId: Set1
    apiRevision: 1
    products: AutomationTests, SystemMonitoring
    protocols: https
    subscriptionRequired: true
    isCurrent: true
    operations:
      customer_get: # its operation id
        policy: $(apimBasePath)\Apis\HealthCheck\HealthCheckPolicy.xml:::BackendUrl=$(attachmentServiceUrl)
    subscriptionKeyParameterNames:
      header: ProviderKey
      query: ProviderKey
  - name: API2
    displayName: API2 v1 [Staging]
    openApiSpec: $(apimBasePath)\Apis\OpenApi.json # Required, can be url or local file
    policy: $(apimBasePath)\Apis\ApiPolicy.xml
    path: api/sample2
    apiVersion: v1
    apiVersionSetId: Set2
    apiRevision: 1
    products: AutomationTests, SystemMonitoring
    protocols: https
    subscriptionRequired: true
    isCurrent: true
    subscriptionKeyParameterNames:
      header: ProviderKey
      query: ProviderKey

products:
  - name: AutomationTests
    displayName: AutomationTests
    description: Product for automation tests
    subscriptionRequired: true
    approvalRequired: true
    subscriptionsLimit: 1
    state: published
    policy: $(apimBasePath)\Products\AutomationTests\policy.xml
  - name: SystemMonitoring
    displayName: SystemMonitoring
    description: Product for system monitoring
    subscriptionRequired: true
    approvalRequired: true
    subscriptionsLimit: 1
    state: published
    policy: $(apimBasePath)\Products\SystemMonitoring\policy.xml

outputLocation: $(apimBasePath)\output
linkedTemplatesBaseUrl: $(linkedTemplatesBaseUrl) # Required if 'linked' property is set to true
The $(variableName) syntax defines variables inside the YAML file, which makes customization easier in CI/CD scenarios.
The next step is to transform the YAML file into ARM templates, which Azure can understand:
dotnet-apim --yamlConfig "c:/apim/definition.yml"
Then you have to deploy the generated ARM templates to Azure which is explained here.
Using the test-api (if you have done the Microsoft demo for API Management, you should recognize it), here's a snippet of Terraform that does work. It does not include the resource group (thisrg).
resource "azurerm_api_management" "apimgmtinstance" {
name = "${var.base_apimgmt_name}-${var.env_name}-apim"
location = azurerm_resource_group.thisrg.location
resource_group_name = azurerm_resource_group.thisrg.name
publisher_name = "Marc Pub"
publisher_email = "marc#trash.com"
sku_name = var.apimgmt_size
/* policy {
xml_content = <<XML
<policies>
<inbound />
<backend />
<outbound />
<on-error />
</policies>
XML
} */
}
resource "azurerm_api_management_product" "apiMgmtProductContoso" {
product_id = "contoso-marc"
display_name = "Contoso Marc"
description = "this is a test"
subscription_required = true
approval_required = true
api_management_name = azurerm_api_management.apimgmtinstance.name
resource_group_name = azurerm_resource_group.thisrg.name
published = true
subscriptions_limit = 2
terms = "you better accept this or else... ;-)"
}
resource "azurerm_api_management_api" "testapi" {
description = "this is a mock test"
display_name = "Test API"
name = "test-api"
protocols = ["https"]
api_management_name = azurerm_api_management.apimgmtinstance.name
resource_group_name = azurerm_resource_group.thisrg.name
// version = "0.0.1"
revision = "1"
path = ""
subscription_required = true
}
data "azurerm_api_management_api" "testapi_data" {
name = azurerm_api_management_api.testapi.name
api_management_name = azurerm_api_management.apimgmtinstance.name
resource_group_name = azurerm_resource_group.thisrg.name
revision = "1"
}
resource "azurerm_api_management_api_operation" "testapi_getop" {
operation_id = "test-call"
api_name = data.azurerm_api_management_api.testapi_data.name
api_management_name = data.azurerm_api_management_api.testapi_data.api_management_name
resource_group_name = data.azurerm_api_management_api.testapi_data.resource_group_name
display_name = "Test call"
method = "GET"
url_template = "/test"
description = "test of call"
response {
status_code = 200
description = ""
representation {
content_type = "application/json"
sample = "{\"sampleField\": \"test\"}"
}
}
}
resource "azurerm_api_management_api_operation_policy" "testapi_getop_policy" {
api_name = azurerm_api_management_api_operation.testapi_getop.api_name
api_management_name = azurerm_api_management_api_operation.testapi_getop.api_management_name
resource_group_name = azurerm_api_management_api_operation.testapi_getop.resource_group_name
operation_id = azurerm_api_management_api_operation.testapi_getop.operation_id
xml_content = <<XML
<policies>
<inbound>
<mock-response status-code="200" content-type="application/json"/>
</inbound>
</policies>
XML
}
Terraform now mostly supports Azure API Management. I've been implementing most of an existing Azure API Management instance in Terraform using a combination of the resources documented at https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/api_management and terraform import. Just do the import in a separate folder: terraform import writes into a terraform.tfstate file, and if you mix up the state of the resource you are importing with the terraform.tfstate of the configuration you are building (generated via terraform plan/apply), you could accidentally delete the resource you are importing. Yay...
It mostly works, except for an API where you modified 'All operations'. I can do it for specific operations (get blah, or post blahblah), but for all operations... I don't yet know.
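For reference, the import ID format documented for azurerm_api_management_api looks like the following; the subscription, resource group, and service names are placeholders, and newer provider versions may expect a ;rev=<n> suffix on the API segment:

terraform import azurerm_api_management_api.testapi /subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.ApiManagement/service/<apim-name>/apis/test-api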