I am trying to enable diagnostic settings for an Application Gateway WAF (shown in a screenshot in the original post) via Terraform code.
However, I cannot find out how. Is it some obscure resource, or is it not implemented at all?
You can use the azurerm_monitor_diagnostic_setting resource to configure the setting, as ydaetskcoR said; it works the way your screenshot shows. Here is the example code:
resource "azurerm_monitor_diagnostic_setting" "example" {
name = "example"
target_resource_id = "application_gateway_resource_id"
storage_account_id = data.azurerm_storage_account.example.id
log {
category = "ApplicationGatewayFirewallLog"
enabled = true
retention_policy {
enabled = true
days = 30
}
}
}
Terraform does not provide a data source for the application gateway (at the time of writing), so you need to supply the resource ID of the existing application gateway yourself, or reference the ID when you create the new application gateway in the same configuration, as sketched below.
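For example, a minimal sketch of the diagnostic setting showing both options (the azurerm_application_gateway.example resource and the placeholder ID are illustrative assumptions):

resource "azurerm_monitor_diagnostic_setting" "waf" {
  name = "waf-diagnostics"

  # Either hard-code the ID of an application gateway that already exists:
  # target_resource_id = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/applicationGateways/<gw-name>"
  # ...or reference one managed in the same configuration:
  target_resource_id = azurerm_application_gateway.example.id

  storage_account_id = data.azurerm_storage_account.example.id

  log {
    category = "ApplicationGatewayFirewallLog"
    enabled  = true

    retention_policy {
      enabled = true
      days    = 30
    }
  }
}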
It seems that logs for Azure WAF (Application Gateway) are not supported by Terraform yet.
When creating an App Service Plan on my new-ish (4-day-old) subscription using Terraform, I immediately get a throttling error:
App Service Plan Create operation is throttled for subscription <subscription>. Please contact support if issue persists
The thing is, when I then go to the UI and create an identical service plan, I receive no errors and it creates without issue. So it's clear that there is actually no throttling issue for creating the app service plan, since I can create it manually.
I'm wondering if anyone knows why this is occurring?
NOTE
I've gotten around this issue by just creating the resource in the UI and then importing it into my TF state (the import command is sketched below)... but since the main point of IaC is automation, I'd like to ensure that this unusual behavior does not persist when I go to create new environments.
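For reference, the import looked roughly like this (a sketch; the subscription ID is a placeholder, and the resource names follow the variables in the code below):

terraform import azurerm_service_plan.frontend_sp "/subscriptions/<subscription-id>/resourceGroups/<env>-<abbr>-frontend/providers/Microsoft.Web/serverFarms/<env>-<abbr>-sp"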
EDIT
My code is as follows:
resource "azurerm_resource_group" "frontend_rg" {
name = "${var.env}-${var.abbr}-frontend"
location = var.location
}
resource "azurerm_service_plan" "frontend_sp" {
name = "${var.env}-${var.abbr}-sp"
resource_group_name = azurerm_resource_group.frontend_rg.name
location = azurerm_resource_group.frontend_rg.location
os_type = "Linux"
sku_name = "B1"
}
EDIT 2
terraform {
  backend "azurerm" {}

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.15.0"
    }
  }
}
I am trying to obtain (via Terraform) the DNS name of a dynamically created VPC endpoint using a data source, but the problem I am facing is that the service name is not known until the resources have been created. See the notes below.
Is there any way of retrieving this information? A hard-coded service name just doesn't work for automation.
e.g. this will not work, as the service_name is dynamic:
resource "aws_transfer_server" "sftp_lambda" {
count = local.vpc_lambda_enabled
domain = "S3"
identity_provider_type = "AWS_LAMBDA"
endpoint_type = "VPC"
protocols = ["SFTP"]
logging_role = var.loggingrole
function = var.lambda_idp_arn[count.index]
endpoint_details = {
security_group_ids = var.securitygroupids
subnet_ids = var.subnet_ids
vpc_id = var.vpc_id
}
tags = {
NAME = "tf-test-transfer-server"
ENV = "test"
}
}
data "aws_vpc_endpoint" "vpce" {
count = local.vpc_lambda_enabled
vpc_id = var.vpc_id
service_name = "com.amazonaws.transfer.server.c-001"
depends_on = [aws_transfer_server.sftp_lambda]
}
output "transfer_server_dnsentry" {
value = data.aws_vpc_endpoint.vpce.0.dns_entry[0].dns_name
}
Note: the VPCE was created automatically by the AWS SFTP transfer server resource, which was configured with an endpoint type of VPC (not VPC_ENDPOINT, which is now deprecated). I had no control over the naming of the endpoint service; it was all created in the background.
Minimum required AWS provider version: 3.69.0.
Here is an example CloudFormation script (linked below) to set up an SFTP transfer server using Lambda as the IdP. This will create the VPCE automatically.
So my aim here is to output the DNS name of the auto-created VPC endpoint using Terraform, if at all possible.
Relevant links:
example setup in CloudFormation
data source: aws_vpc_endpoint
resource: aws_transfer_server
I had a response from HashiCorp Terraform support on this, and this is what they suggested:
you can get the service name of the SFTP-server-created VPC endpoint by reading the following exported attribute of the vpc_endpoint_service resource [a].
NOTE: There are certain setups that cause AWS to create additional resources outside of what you configured. The AWS SFTP transfer service is one of them. This behavior is outside Terraform's control and is due to how AWS designed the service.
You can bring that VPC endpoint back under Terraform's control, however, by importing the VPC endpoint AWS creates on your behalf AFTER the transfer service has been created, via the VPCe ID [b].
If you want more ideas for pulling the service name from your current AWS setup, feel free to check out this example [c].
Hope that helps! Thank you.
[a] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint_service#service_name
[b] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint#import
[c] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc_endpoint#gateway-load-balancer-endpoint-type
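In practice, importing the auto-created endpoint per [b] looks something like this (a sketch; aws_vpc_endpoint.transfer is a hypothetical resource block you would declare first, and vpce-<endpoint-id> is a placeholder for the ID AWS assigned):

terraform import aws_vpc_endpoint.transfer vpce-<endpoint-id>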
There is a way forward, as I shared earlier with the imports, but it is not going to be fully automated, unfortunately.
Optionally, you can use a provisioner [1] and the aws ec2 describe-vpc-endpoint-services --service-names command [2] to get the service names you need (a sketch of this follows the links below).
I'm afraid that's the last workaround I can provide; as explained in our doc here [3], as much as we'd like to, Terraform isn't able to solve all use-cases.
[1] https://www.terraform.io/language/resources/provisioners/remote-exec
[2] https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-vpc-endpoint-services.html
[3] https://www.terraform.io/language/resources/provisioners/syntax
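A minimal sketch of that provisioner approach, assuming the AWS CLI is available on the machine running Terraform (local-exec is used here rather than the remote-exec linked above, since the command runs locally; the service-name value and the output file name are placeholders):

resource "null_resource" "vpce_service_lookup" {
  # Once the transfer server (and its auto-created VPC endpoint) exists,
  # dump the matching endpoint service details to a local JSON file.
  provisioner "local-exec" {
    command = "aws ec2 describe-vpc-endpoint-services --service-names <service-name> > vpce_services.json"
  }

  depends_on = [aws_transfer_server.sftp_lambda]
}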
I've finally found the solution:
data "aws_vpc_endpoint" "transfer_server_vpce" {
count = local.is_enabled
vpc_id = var.vpc_id
filter {
name = "vpc-endpoint-id"
values = ["${aws_transfer_server.transfer_server[0].endpoint_details[0].vpc_endpoint_id}"]
}
}
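With that filter in place, the DNS name can be output much as before (a sketch mirroring the earlier output block):

output "transfer_server_dnsentry" {
  value = data.aws_vpc_endpoint.transfer_server_vpce[0].dns_entry[0].dns_name
}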
I'm having a bit of a problem parsing the differences between Azure's names for things and Terraform's names for things, but overall I'm making a good go of it. I am having some specific problems, though. My situation is that someone built the APIM using the Azure portal, and the company now wants to "make it scalable" by using Terraform to build it out. I've got a pretty good riff going (define, plan, import, plan, modify), but there are some parts of Azure APIM that I can't map (mentally) to Terraform commands. My first one is this screen right here (the Definitions tab of an API in APIM).
Since I'm still fresh in terms of rep on Stack Overflow, I can't actually show the image. But in the portal, at the bottom of the API, there is a tab called "Definitions". I haven't been able to see a) how to "get" them using Azure PowerShell, and b) how to "set" them with Terraform.
Would someone more knowledgeable about AzureRM and Terraform be able to steer me in the right direction, please?
One workaround you can follow to deploy an API Management instance with APIs:
We have tried to create an APIM instance with an API.
Here is the sample Terraform code that we used; you can adapt it by changing resource names according to your requirements.
example.tf:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_api_management" "example" {
name = "example-apimajmt"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
publisher_name = "My Company"
publisher_email = "company#terraform.io"
sku_name = "Developer_1"
}
resource "azurerm_api_management_api" "example" {
name = "example-apiajmt"
resource_group_name = azurerm_resource_group.example.name
api_management_name = azurerm_api_management.example.name
revision = "1"
display_name = "ajtest API"
path = "example"
protocols = ["https"]
import {
content_format = "swagger-link-json"
content_value = "http://conferenceapi.azurewebsites.net/?format=json"
}
}
After creation, we can use it to add tags:
/*
resource "azurerm_api_management_api_tag" "example" {
  api_id = azurerm_api_management_api.example.id
  name   = "example-tagajmt"
}
*/
Once terraform apply has finished, you will be able to get the APIM instance along with the API and its tags after some time.
NOTE: creation of the APIM instance can take up to 45 minutes.
For more information on configuring API Management with Terraform, please refer to the Terraform Registry documentation for azurerm_api_management and this similar SO thread: Tag an API in Azure API Management with Terraform.
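As for the Definitions tab specifically: those definitions correspond to the API's schemas, which the azurerm provider can manage through the azurerm_api_management_api_schema resource. A minimal sketch, building on the resources above (the schema_id, content_type, and file name are illustrative assumptions):

resource "azurerm_api_management_api_schema" "example" {
  api_name            = azurerm_api_management_api.example.name
  api_management_name = azurerm_api_management.example.name
  resource_group_name = azurerm_resource_group.example.name
  schema_id           = "example-schema"                 # placeholder ID
  content_type        = "application/vnd.oai.openapi.components+json"
  value               = file("api_definitions.json")     # placeholder file
}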
I'm having some trouble getting the azurerm and databricks providers to work together.
With the azurerm provider, I set up my workspace:
resource "azurerm_databricks_workspace" "ws" {
name = var.workspace_name
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "premium"
managed_resource_group_name = "${azurerm_resource_group.rg.name}-mng-rg"
custom_parameters {
virtual_network_id = data.azurerm_virtual_network.vnet.id
public_subnet_name = var.public_subnet
private_subnet_name = var.private_subnet
}
}
No matter how I structure this, I can't seem to get azurerm_databricks_workspace.ws.id to work in the provider statement for databricks in the same configuration. If it did work, the above workspace would be defined in the same configuration and I'd have a provider statement that looks like this:
provider "databricks" {
azure_workspace_resource_id = azurerm_databricks_workspace.ws.id
}
Error:
I have my ARM_* environment variables set to identify as a Service Principal with Contributor on the subscription.
I've tried it in the same configuration, and in a module while consuming its outputs. The only way I can get it to work is by running one configuration for the workspace and a second configuration to consume the workspace.
This is super suboptimal: I have a fair amount of repeated values across those configurations, and it would be ideal to have just one.
Has anyone been able to do this?
Thank you :)
I've had the exact same issue with a non-working databricks provider because I was working with modules. I separated the databricks infra (Azure) from the databricks application (databricks provider).
In my databricks module I added the following code at the top; otherwise it would use my Azure setup:
terraform {
  required_providers {
    databricks = {
      source  = "databrickslabs/databricks"
      version = "0.3.1"
    }
  }
}
In my normal provider setup I have the following settings for databricks:
provider "databricks" {
azure_workspace_resource_id = module.databricks_infra.databricks_workspace_id
azure_client_id = var.ARM_CLIENT_ID
azure_client_secret = var.ARM_CLIENT_SECRET
azure_tenant_id = var.ARM_TENANT_ID
}
And of course I have the Azure one. Let me know if it worked :)
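For this to work, the infra module must expose the workspace ID as an output; a minimal sketch (assuming the azurerm_databricks_workspace.ws resource from the question lives inside that module):

# Inside the databricks infra module
output "databricks_workspace_id" {
  value = azurerm_databricks_workspace.ws.id
}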
If you experience technical difficulties with rolling out resources in this example, please make sure that environment variables don't conflict with other provider block attributes. When in doubt, please run TF_LOG=DEBUG terraform apply to enable debug mode through the TF_LOG environment variable. Look specifically for Explicit and implicit attributes lines, which should indicate the authentication attributes used. The other common reason for technical difficulties might be a missing alias attribute in provider "databricks" {} blocks or a missing provider attribute in resource "databricks_..." {} blocks. Please make sure to read the alias: Multiple Provider Configurations documentation article.
From the error message, it looks like authentication is not configured for the provider; could you please configure it through one of the options mentioned above.
For more details, refer to Databricks provider - Authentication.
For passing the custom_parameters, you may check out this SO thread, which addresses a similar issue.
In case you need more help on this issue, I would suggest opening an issue here: https://github.com/terraform-providers/terraform-provider-azurerm/issues
Currently there exists a module to create a log diagnostic setting for Azure resources, linked here. Using the portal, I am able to create a diagnostic setting for activity logs as well, as mentioned here. I was trying to enable activity log diagnostic settings that send logs to a storage account, and only came across this module.
However, it seems that it is not possible to use this module to send activity logs to a Log Analytics workspace. It also does not support the log categories mentioned in the portal (i.e. Administrative, Security, ServiceHealth, etc.) and only provides Action, Delete, and Write. This leads me to believe that they are not intended for the same purpose. The first module requires a target_resource_id, and since activity logs exist at the subscription level, no such ID exists.
As such, is it possible to use the first mentioned module, or an entirely different module, to enable these diagnostic settings? Any help regarding the matter would be appreciated.
You can configure this by specifying the subscription ID as the target_resource_id within an azurerm_monitor_diagnostic_setting resource.
Example:
resource "azurerm_monitor_diagnostic_setting" "example" {
name = "example"
target_resource_id = "/subscriptions/85306735-db49-41be-b899-b0fc48095b01"
eventhub_name = azurerm_eventhub.diagnostics.name
eventhub_authorization_rule_id = azurerm_eventhub_namespace_authorization_rule.diagnostics.id
log {
category = "Administrative"
retention_policy {
enabled = false
}
}
You should use the log_analytics_workspace_id attribute:
resource "azurerm_monitor_diagnostic_setting" "example" {
name = "example"
target_resource_id = "/subscriptions/xxxx"
log_analytics_workspace_id = azurerm_log_analytics_workspace.this.id
log_analytics_destination_type = "Dedicated" # or null see [documentation][1]
log {
category = "Administrative"
retention_policy {
enabled = false
}
}