I am attempting to use Terraform with an embedded ARM template to create a simple logic app in Azure. I have this resource block in Terraform:
resource "azurerm_resource_group_template_deployment" "templateTEST" {
name = "arm-Deployment"
resource_group_name = azurerm_resource_group.rg.name
deployment_mode = "Incremental"
template_content = file("${path.module}/arm/createLogicAppsTEST.json")
parameters_content = jsonencode({
logic_app_name = { value = "logic-${var.prefix}" }
})
}
and the createLogicAppsTEST.json file is defined as follows (just the top few lines):
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"logic_app_name": {
"defaultValue": "tsa-logic-dgtlbi-stage-001",
"type": "string"
}
},
"variables": {},
"resources": [
{
....
When deploying and running for the first time, i.e. creating the logic app resource using Terraform and the embedded ARM template, it creates the resource and passes the name correctly from:
parameters_content = jsonencode({
logic_app_name = { value = "logic-${var.prefix}" }
})
However, if I run it again, Terraform appears to ignore the parameters I am passing and falls back to the default from the ARM template:
"logic_app_name": {
"defaultValue": "tsa-logic-dgtlbi-stage-001",
"type": "string"
}
I have updated to the latest versions of both Terraform (0.14.2) and azurerm (2.40.0), yet the issue persists. At present this makes embedding ARM templates in Terraform problematic, because the dev, test, and prod tiers at my company use different prefixes and names, e.g. prod-, dev-.
Is there a setting to make terraform actually use the parameters I am passing with azurerm_resource_group_template_deployment resource block?
After validating this myself, I can confirm you can use the ignore_changes argument in the nested lifecycle block. It tells Terraform to ignore the listed attributes when planning updates to the associated remote object.
For example,
resource "azurerm_resource_group_template_deployment" "templateTEST" {
name = "arm-Deployment"
resource_group_name = azurerm_resource_group.rg.name
deployment_mode = "Incremental"
template_content = file("${path.module}/arm/createLogicAppsTEST.json")
parameters_content = jsonencode({
logic_app_name = { value = "logic-${var.prefix}" }
})
lifecycle {
ignore_changes = [template_content,]
}
}
However, in this case it would be better to declare the parameters without default values in the embedded ARM template, and pass the real values via parameters_content instead.
For example, declare the parameter like this in the ARM template; the deployment will then always use the value supplied externally:
"logic_app_name": {
"type": "string"
}
I elected to just use the old provider; there is actually an open bug report about this same issue on GitHub.
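For reference, the legacy azurerm_template_deployment resource in the older provider takes a plain parameters map instead of parameters_content; a rough sketch of the equivalent deployment, reusing the names from the question:
resource "azurerm_template_deployment" "templateTEST" {
  name                = "arm-Deployment"
  resource_group_name = azurerm_resource_group.rg.name
  deployment_mode     = "Incremental"

  # the legacy resource uses template_body rather than template_content
  template_body = file("${path.module}/arm/createLogicAppsTEST.json")

  # plain string map; no jsonencode and no nested { value = ... } objects
  parameters = {
    logic_app_name = "logic-${var.prefix}"
  }
}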
Related
Consider the following Terraform template:
terraform {
required_providers {
azurecaf = {
source = "aztfmod/azurecaf"
version = "1.2.23"
}
}
}
provider "azurerm" {
features {}
}
resource "azurecaf_name" "rg_name" {
name = var.appname
resource_type = "azurerm_resource_group"
prefixes = ["dev"]
suffixes = ["y", "z"]
random_length = 5 // <------ random part in name generation
clean_input = false
}
resource "azurerm_resource_group" "example" {
name = azurecaf_name.rg_name.result
location = var.resource_group_location
}
I applied this template, then ran terraform plan. Terraform plan tells me:
No changes. Your infrastructure matches the configuration.
How does that work, given that azurecaf_name.rg_name contains random characters? I would have expected it to create a new resource group with a new (random) name. I know that Terraform keeps state, but doesn't it evaluate the template every time (= new random name) and then check whether that matches the state and the real resources in the cloud?
The random characters are stored in the state file along with the other attributes of the resource, such as random_length, prefixes, and suffixes. When you rerun Terraform, it first checks whether any of the attributes in your configuration have changed from what is in the state file. In your case, values like random_length are all the same, so Terraform knows it does not need to regenerate anything random, since your configuration has not changed. Terraform then checks the state of the resource in the remote provider using the resource ID the provider generated. If the resource does not exist, Terraform will create it with the same data; if it exists but its attributes do not match, it will update the resource; and if the values there match what is in the state file, Terraform knows no changes are needed.
We can see this in the random_pet resource
resource "random_pet" "foo" {
length = 3
prefix = "foo"
}
State file
"resources": [
{
"mode": "managed",
"type": "random_pet",
"name": "foo",
"provider": "provider[\"registry.terraform.io/hashicorp/random\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"id": "foo-evidently-secure-killdeer",
"keepers": null,
"length": 3,
"prefix": "foo",
"separator": "-"
},
"sensitive_attributes": []
}
]
}
],
Normally, resources that have a form of randomness generate the random value only once, when the resource is created, and then keep reusing the value stored in state; a new value is produced only when one of the resource's own arguments changes or the resource is otherwise replaced.
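For instance, with the random_pet example above, changing length or prefix, or adding or changing an entry in the keepers map, forces the resource to be replaced and a new name to be generated. A small sketch (the keepers entry is purely illustrative):
resource "random_pet" "foo" {
  length = 3
  prefix = "foo"

  # changing any value in keepers (or length/prefix) forces a new pet name
  keepers = {
    redeploy = "v2" # illustrative trigger value
  }
}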
For some reason I am having lots of problems with the resource group template deployment resource. We deploy a logic app during creation; however, on a second run, terraform plan detects changes even after setting ignore_changes to all in the lifecycle block.
I am unsure if this is normal behavior; any help would be appreciated.
resource "azurerm_resource_group_template_deployment" "deploylogicapp" {
name = "template_deployment"
resource_group_name = azurerm_resource_group.bt_security.name
deployment_mode = "Incremental"
template_content = <<TEMPLATE
{
"ARM template json body"
}
TEMPLATE
lifecycle {
ignore_changes=all
}
}
EDIT
Managed to find the issue.
Added a lifecycle block to the logic app workflow resource:
resource "azurerm_logic_app_workflow" "logicapp" {
name = "azdevops-app"
location = azurerm_resource_group.bt_security.location
resource_group_name = azurerm_resource_group.bt_security.name
lifecycle {
ignore_changes = all
}
}
Then, in the ARM group template deployment resource, I set ignore_changes to [template_content] instead of all.
Ran terraform plan twice, and it did not detect any changes on the second run, which is what we wanted.
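Putting the two changes together, the working configuration ends up looking roughly like this (the ARM body is elided exactly as in the question):
resource "azurerm_logic_app_workflow" "logicapp" {
  name                = "azdevops-app"
  location            = azurerm_resource_group.bt_security.location
  resource_group_name = azurerm_resource_group.bt_security.name

  lifecycle {
    ignore_changes = all
  }
}

resource "azurerm_resource_group_template_deployment" "deploylogicapp" {
  name                = "template_deployment"
  resource_group_name = azurerm_resource_group.bt_security.name
  deployment_mode     = "Incremental"
  template_content    = <<TEMPLATE
{
  "ARM template json body"
}
TEMPLATE

  lifecycle {
    # only the template body is ignored now, not every attribute
    ignore_changes = [template_content]
  }
}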
I am trying to code an Azure Data Factory in Terraform, but I am not sure how to code this REST dataset:
{
"name": "RestResource1",
"properties": {
"linkedServiceName": {
"referenceName": "API_Connection",
"type": "LinkedServiceReference"
},
"annotations": [],
"type": "RestResource",
"schema": []
},
"type": "Microsoft.DataFactory/factories/datasets"
}
I don't see one in the azurerm documentation. Can one use an azurerm_data_factory_dataset_http resource instead?
azurerm_data_factory_linked_service_rest - Does not currently exist.
azurerm_data_factory_linked_service_web - This only supports a web table, not a REST API endpoint, and can't be used with the Azure integration runtime.
When I tried to create the linked service using the rest and http resource types, Terraform always ended up creating a web table linked service. Hence, for now, the fix is to use azurerm_data_factory_linked_custom_service.
Here is an example of how to create a custom linked service:
provider "azurerm" {
features{}
}
data "azurerm_resource_group" "example" {
name = "Your Resource Group"
}
data "azurerm_data_factory" "example" {
name = "vipdashadf"
resource_group_name = data.azurerm_resource_group.example.name
}
resource "azurerm_data_factory_linked_custom_service" "example" {
name = "ipdashlinkedservice"
data_factory_id = data.azurerm_data_factory.example.id
type = "RestService"
description = "test for rest linked"
type_properties_json = <<JSON
{
"url": "http://www.bing.com",
"enableServerCertificateValidation": false,
"authenticationType": "Anonymous"
}
JSON
annotations = []
}
resource "azurerm_data_factory_dataset_http" "example" {
name = "apidataset"
resource_group_name = data.azurerm_resource_group.example.name
data_factory_name = data.azurerm_data_factory.example.name
linked_service_name = azurerm_data_factory_linked_custom_service.example.name
relative_url = "http://www.bing.com"
request_body = "foo=bar"
request_method = "POST"
}
Outputs:
Linked service: ipdashlinkedservice (REST connector type)
Dataset: apidataset
You can find the same discussed on GitHub: Support for Azure Data Factory Linked Service for REST API #9431
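If your azurerm version also ships the azurerm_data_factory_custom_dataset resource (newer releases do), the RestResource dataset from the question can arguably be modelled more directly. A hedged sketch, reusing the custom linked service above; the empty typeProperties object mirrors the dataset JSON in the question:
resource "azurerm_data_factory_custom_dataset" "rest" {
  name            = "RestResource1"
  data_factory_id = data.azurerm_data_factory.example.id
  type            = "RestResource"

  linked_service {
    name = azurerm_data_factory_linked_custom_service.example.name
  }

  # the dataset JSON in the question has no typeProperties, so an empty object matches it
  type_properties_json = <<JSON
{}
JSON

  annotations = []
}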
Using Terraform and an Azure ARM template, I want to use Terraform output as input for the ARM template. I need help deciding the best way to perform this:
How to store the terraform output in KeyVault/Storage/Tables
Use the output for ARM template input
Thanks in advance.
How to store the terraform output in KeyVault/Storage/Tables
The Terraform output is defined by you, so you can decide what it contains. To store values in Key Vault/Storage/Tables, just create the corresponding resources. For example, if you want to store the VM resource ID as a secret, you can do it like this:
resource "azurerm_virtual_machine" "main" {
name = "${var.prefix}-vm"
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
...
}
...
resource "azurerm_key_vault_secret" "example" {
name = "secret-sauce"
value = azurerm_virtual_machine.main.id
key_vault_id = azurerm_key_vault.example.id
tags = {
environment = "Production"
}
}
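If you also want the value available as a regular Terraform output (which is what "Terraform output" usually refers to), that is just an output block, for example:
output "vm_resource_id" {
  value = azurerm_virtual_machine.main.id
}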
Use the output for ARM template input
For deploying an ARM template from Terraform, take a look at the example below. Notice the parameters property: every parameter declared in the ARM template can be set by adding a matching entry to that map. For example:
resource "azurerm_template_deployment" "example" {
name = "acctesttemplate-01"
resource_group_name = azurerm_resource_group.example.name
template_body = <<DEPLOY
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"storageAccountType": {
"type": "string",
"defaultValue": "Standard_LRS",
"allowedValues": [
"Standard_LRS",
"Standard_GRS",
"Standard_ZRS"
],
"metadata": {
"description": "Storage Account type"
}
}
},
...
}
DEPLOY
# these key-value pairs are passed into the ARM Template's `parameters` block
parameters = {
"storageAccountType" = "Standard_GRS"
}
deployment_mode = "Incremental"
}
You can see the parameter storageAccountType both in the parameters property of the azurerm_template_deployment resource and in the ARM template itself. Likewise, if you want to set an ARM template parameter from a Terraform value, for example the VM resource ID, you can pass the Terraform output into the ARM template like this:
resource "azurerm_template_deployment" "example" {
name = "acctesttemplate-01"
resource_group_name = azurerm_resource_group.example.name
template_body = <<DEPLOY
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"vm_resource_id": {
"type": "string"
}
},
...
}
DEPLOY
# these key-value pairs are passed into the ARM Template's `parameters` block
parameters = {
"vm_resource_id" = azurerm_virtual_machine.main.id
}
deployment_mode = "Incremental"
}
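Note that azurerm_template_deployment is the legacy resource (it was removed in azurerm 3.0). With the newer azurerm_resource_group_template_deployment, the same idea applies, except the values are passed as parameters_content with jsonencode, roughly (the template path is illustrative):
resource "azurerm_resource_group_template_deployment" "example" {
  name                = "acctesttemplate-01"
  resource_group_name = azurerm_resource_group.example.name
  deployment_mode     = "Incremental"
  template_content    = file("${path.module}/template.json") # illustrative path

  parameters_content = jsonencode({
    vm_resource_id = { value = azurerm_virtual_machine.main.id }
  })
}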
I have an EC2 instance that was manually created on AWS. I need to run a bash script inside my instance using Terraform without recreating the EC2 instance. This is my .tf file for this:
instance.tf
resource "aws_key_pair" "mykey" {
key_name = "mykey"
public_key = file(var.PUBLIC_KEY)
}
resource "aws_instance" "example" {
key_name = aws_key_pair.mykey.key_name
provisioner "file" {
source="script.sh"
destination="/tmp/script.sh"
}
connection {
type ="ssh"
user ="ubuntu"
private_key=file(var.PRIVATE_KEY)
host = coalesce(self.public_ip, self.private_ip)
}
}
vars.tf
variable "INSTANCE_USERNAME" {
default = "ubuntu"
}
variable "PUBLIC_KEY" {
default = "mykey.pub"
}
variable "PRIVATE_KEY" {
default ="mykey"
}
variable "AMIS" {}
variable "INSTANCE_TYPE" {}
provider.tf
provider "aws" {
access_key = "sd**********"
secret_key = "kubsd**********"
region = "us-east-2"
}
I have imported my current state using
terraform import aws_instance.example instance-id
This is my state file
{
"version": 4,
"terraform_version": "0.12.17",
"serial": 1,
"lineage": "54385313-09b6-bc71-7c9c-a3d82d1f7d2f",
"outputs": {},
"resources": [
{
"mode": "managed",
"type": "aws_instance",
"name": "example",
"provider": "provider.aws",
"instances": [
{
"schema_version": 1,
"attributes": {
"ami": "ami-0d5d9d301c853a04a",
"arn": "arn:aws:ec2:us-east-2:148602461879:instance/i-054caec795bbbdf2d",
"associate_public_ip_address": true,
"availability_zone": "us-east-2c",
"cpu_core_count": 1,
"cpu_threads_per_core": 1,
"credit_specification": [
{
"cpu_credits": "standard"
}
],
continues...
But when I run terraform plan it shows errors like:
Error: Missing required argument
on instance.tf line 5, in resource "aws_instance" "example":
5: resource "aws_instance" "example" {
The argument "ami" is required, but no definition was found.
Error: Missing required argument
on instance.tf line 5, in resource "aws_instance" "example":
5: resource "aws_instance" "example" {
The argument "instance_type" is required, but no definition was found.
I can't understand why it is asking for instance_type and ami. They are present in the terraform.tfstate file after importing my state. Do I need to pass this data manually? Is there any way to automate this process?
The terraform import command exists to allow you to bypass the usual requirement that Terraform must be the one to create a remote object representing each resource in your configuration. You should think of it only as a way to tell Terraform that your resource "aws_instance" "example" block is connected to the remote object instance-id (using the placeholder you showed in your example). You still need to tell Terraform the desired configuration for that object, because part of Terraform's role is to notice when your configuration disagrees with the remote objects (via the state) and make a plan to correct it.
Your situation here is a good example of why Terraform doesn't just automatically write out the configuration for you in terraform import: you seem to want to set these arguments from input variables, but Terraform has no way to know that unless you write that out in the configuration:
resource "aws_instance" "example" {
# I'm not sure how you wanted to map the list of images to one
# here, so I'm just using the first element for example.
ami = var.AMIS[0]
instance_type = var.INSTANCE_TYPE
key_name = aws_key_pair.mykey.key_name
}
By writing that out explicitly, Terraform can see where the values for those arguments are supposed to come from. (Note that idiomatic Terraform style is for variables to have lower-case names like instance_type, not uppercase names like INSTANCE_TYPE.)
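If you did rename them, the declarations might look something like this (the types are my assumption, since the original variables were declared without types):
variable "instance_type" {
  type    = string
  default = "t2.micro" # illustrative default
}

variable "amis" {
  type = list(string)
}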
There is no way to use existing values from the state to populate the configuration because that is the reverse direction of Terraform's flow: creating a Terraform plan compares the configuration to the state and detects when the configuration is different, and then generates actions that will make the remote objects match the configuration. Using the values in the state to populate the configuration would defeat the object, because Terraform would never be able to detect any differences.
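As a side note on the original goal of running script.sh without recreating the instance: provisioners only run when their resource is created, so they will not fire on an instance that was merely imported. A common workaround, sketched below rather than taken from the answer above, is to attach the provisioners to a null_resource whose trigger changes whenever the script should run again:
resource "null_resource" "run_script" {
  # re-run the provisioners whenever the script contents change
  triggers = {
    script_hash = filemd5("script.sh")
  }

  connection {
    type        = "ssh"
    user        = var.INSTANCE_USERNAME
    private_key = file(var.PRIVATE_KEY)
    host        = aws_instance.example.public_ip
  }

  provisioner "file" {
    source      = "script.sh"
    destination = "/tmp/script.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh",
      "/tmp/script.sh",
    ]
  }
}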