I have this code in terraform:
data "archive_file" "lambdazip" {
  type        = "zip"
  output_path = "lambda_launcher.zip"
  source_dir  = "lambda/etc"
  source_dir  = "lambda/node_modules"

  source {
    content  = "${data.template_file.config_json.rendered}"
    filename = "config.json"
  }
}
I get the following errors when I do terraform plan:
* data.archive_file.lambdazip: "source": conflicts with source_dir
("lambda/node_modules")
* data.archive_file.lambdazip: "source_content_filename": conflicts
with source_dir ("lambda/node_modules")
* data.archive_file.lambdazip: "source_dir": conflicts with
source_content_filename ("/home/user1/experiments/grascenote-
poc/init.tpl")
I am using terraform version v0.9.11
@Ram is correct. You cannot use both the source_dir and source arguments in the same archive_file data source.
config_json.tpl
{"test": "${override}"}
Terraform
Terraform 0.12 and higher
Use templatefile()
main.tf
# create the template file config_json separately from the archive_file block
resource "local_file" "config" {
  content = templatefile("${path.module}/config_json.tpl", {
    override = "my value"
  })
  filename = "${path.module}/lambda/etc/config.json"
}
Terraform 0.11 and below
Use the template provider.
main.tf
data "template_file" "config_json" {
  template = "${file("${path.module}/config_json.tpl")}"

  vars = {
    override = "my value"
  }
}

# create the template file config_json separately from the archive_file block
resource "local_file" "config" {
  content  = "${data.template_file.config_json.rendered}"
  filename = "${path.module}/lambda/etc/config.json"
}
Next steps
Add to main.tf
# now you can grab the entire lambda source directory or specific subdirectories
data "archive_file" "lambdazip" {
  type        = "zip"
  output_path = "lambda_launcher.zip"
  source_dir  = "${path.module}/lambda/"

  depends_on = [
    local_file.config,
  ]
}
Terraform run
$ terraform init
$ terraform apply
data.template_file.config_json: Refreshing state...
data.archive_file.lambdazip: Refreshing state...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ local_file.config
id: <computed>
content: "{\"test\": \"my value\"}\n"
filename: "/Users/user/lambda/config.json"
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
local_file.config: Creating...
content: "" => "{\"test\": \"my value\"}\n"
filename: "" => "/Users/user/lambda/config.json"
local_file.config: Creation complete after 0s (ID: 05894e86414856969d915db57e21008563dfcc38)
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
List the contents of the new zip file
$ unzip -l lambda_launcher.zip
Archive:  lambda_launcher.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
       21  01-01-2049 00:00   etc/config.json
       22  01-01-2049 00:00   node_modules/index.js
---------                     -------
       43                     2 files
For a Node.js Lambda function, you need to use a local_file resource together with depends_on, and to keep rendered files separate from static directories.
First, put the static directories (etc, node_modules) into the "lambda" folder, without any rendered files.
Second, keep the files to be rendered in some other path.
data "template_file" "config_json" {
  template = "${file("${path.module}/config_json.tpl")}"

  vars = {
    foo = "bar"
  }
}

resource "local_file" "config_json" {
  content  = "${data.template_file.config_json.rendered}"
  filename = "${path.module}/lambda/config.json"
}

data "archive_file" "lambda_zip" {
  type        = "zip"
  output_path = "${path.module}/lambda_function.zip"
  source_dir  = "${path.module}/lambda"

  # This dependency is essential: the config file must be rendered
  # before the directory is zipped.
  depends_on = [
    "local_file.config_json"
  ]
}
resource "aws_lambda_function" "lambda" {
  filename      = "${path.module}/lambda_function.zip"
  function_name = "lambda_function"
  role          = "${aws_iam_role.lambda.arn}"
  handler       = "index.handler"
  runtime       = "nodejs10.x"
}

resource "aws_iam_role" "lambda" {
...
Related
I got this message when I ran my terraform script:
Warning: Deprecated Resource
using archive_file as a resource is deprecated; consider using the data source instead
The question is, how should I do this? I tried to read about the data source, but it didn't clarify anything for me.
I use archive_file in lambda definition for zipping my lambda source and getting target zip hash.
resource "archive_file" "archive_csv_validate" {
  type        = "zip"
  source_dir  = "lambda/csv-validate"
  output_path = "artifacts/csv-validate.zip"
}

resource "aws_lambda_function" "lambda_csv_validate_function" {
  function_name    = "csv-validate"
  filename         = archive_file.archive_csv_validate.output_path
  source_code_hash = archive_file.archive_csv_validate.output_base64sha256
  handler          = "main.main"
  role             = aws_iam_role.lambda_iam_role.arn
  runtime          = "python3.9"
  timeout          = 900
}
archive_file is now a data source.
You can transform your code like this:
data "archive_file" "archive_csv_validate" {
  type        = "zip"
  source_dir  = "lambda/csv-validate"
  output_path = "artifacts/csv-validate.zip"
}

resource "aws_lambda_function" "lambda_csv_validate_function" {
  function_name    = "csv-validate"
  filename         = data.archive_file.archive_csv_validate.output_path
  source_code_hash = data.archive_file.archive_csv_validate.output_base64sha256
  handler          = "main.main"
  role             = aws_iam_role.lambda_iam_role.arn
  runtime          = "python3.9"
  timeout          = 900
}
I am trying to create a few storage accounts and some containers in each account. I need to create this as a module so that I can reuse it. The way I am thinking of doing this is by creating a variable such as
storageaccounts = [
  {
    name       = "testbackupstorage11"
    containers = ["logs", "web", "backups"]
  },
  {
    name       = "testbackupstorage12"
    containers = ["logs-1", "web-1"]
  }
]
I've created the following code. However, I think this line
count = length(var.storageaccounts.*.containers)
is giving me the error. I want to loop through the storageaccounts array, get the containers, and assign the length of each containers list to count inside the azurerm_storage_container block so that it creates the containers in each storage account.
However, this doesn't work as expected, most likely because of the * splat.
I also tested with
count = length(var.storageaccounts[count.index].containers)
when I do this, I get the error
on ..\modules\storage\main.tf line 21, in resource "azurerm_storage_container" "this":
21: count = length(var.storageaccounts[count.index].containers)
The "count" object can be used only in "resource" and "data" blocks, and only
when the "count" argument is set.
How can I accomplish this? Or is there any better way?
Here is the full code.
resource "random_id" "this" {
  count = length(var.storageaccounts)

  keepers = {
    storagename = 1
  }

  byte_length = 6
  prefix      = var.storageaccounts[count.index].name
}

resource "azurerm_storage_account" "this" {
  count                    = length(var.storageaccounts)
  name                     = substr(lower(random_id.this[count.index].hex), 0, 24)
  resource_group_name      = var.resourcegroup
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "this" {
  count                 = length(var.storageaccounts.*.containers)
  name                  = var.storageaccounts[count.index].containers[count.index]
  storage_account_name  = azurerm_storage_account.this[count.index].name
  container_access_type = "private"
}

provider "random" {
  version = "2.2"
}
locals {
  storageaccounts = [
    {
      name       = "testbackupstorage11"
      containers = ["logs", "web", "backups"]
    },
    {
      name       = "testbackupstorage12"
      containers = ["logs-1", "web-1"]
    }
  ]
}

module "storage" {
  source          = "../modules/storage"
  resourcegroup   = "my-test"
  location        = "eastus"
  storageaccounts = local.storageaccounts
}

provider "azurerm" {
  version = "=2.0.0"
  features {}
}

//variable "prefix" {}
variable "location" {}
variable "resourcegroup" {}

variable "storageaccounts" {
  default = []
  type = list(object({
    name       = string
    containers = list(string)
  }))
}
count = length(var.storageaccounts.*.containers) returns the length of var.storageaccounts, which is 2: the splat expression yields one containers list per storage account, not a flat list of containers.
count = length(var.storageaccounts[count.index].containers) fails because count.index cannot be referenced inside the expression that defines count itself.
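To see why, here is a quick terraform console sketch with the variable above (exact output formatting varies by Terraform version):

```hcl
> var.storageaccounts.*.containers
[
  ["logs", "web", "backups"],
  ["logs-1", "web-1"],
]
> length(var.storageaccounts.*.containers)
2
```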
What you can do is flatten the lists.
For example:
variables.tf
variable "storageaccounts" {
  default = []
  type = list(object({
    name       = string
    containers = list(string)
  }))
}
main.tf
resource "null_resource" "cluster" {
  count = length(flatten(var.storageaccounts.*.containers))

  provisioner "local-exec" {
    command = "echo ${flatten(var.storageaccounts.*.containers)[count.index]}"
  }
}
variables.tfvars
storageaccounts = [
  {
    name       = "testbackupstorage11"
    containers = ["logs", "web", "backups"]
  },
  {
    name       = "testbackupstorage12"
    containers = ["logs-1", "web-1"]
  }
]
The plan
terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# null_resource.cluster[0] will be created
+ resource "null_resource" "cluster" {
+ id = (known after apply)
}
# null_resource.cluster[1] will be created
+ resource "null_resource" "cluster" {
+ id = (known after apply)
}
# null_resource.cluster[2] will be created
+ resource "null_resource" "cluster" {
+ id = (known after apply)
}
# null_resource.cluster[3] will be created
+ resource "null_resource" "cluster" {
+ id = (known after apply)
}
# null_resource.cluster[4] will be created
+ resource "null_resource" "cluster" {
+ id = (known after apply)
}
Plan: 5 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
This plan was saved to: /path/plan
To perform exactly these actions, run the following command to apply:
terraform apply "/path/plan"
The application
terraform apply /outputs/basics/plan
null_resource.cluster[1]: Creating...
null_resource.cluster[4]: Creating...
null_resource.cluster[3]: Creating...
null_resource.cluster[0]: Creating...
null_resource.cluster[2]: Creating...
null_resource.cluster[3]: Provisioning with 'local-exec'...
null_resource.cluster[1]: Provisioning with 'local-exec'...
null_resource.cluster[4]: Provisioning with 'local-exec'...
null_resource.cluster[0]: Provisioning with 'local-exec'...
null_resource.cluster[2]: Provisioning with 'local-exec'...
null_resource.cluster[3] (local-exec): Executing: ["/bin/sh" "-c" "echo logs-1"]
null_resource.cluster[2] (local-exec): Executing: ["/bin/sh" "-c" "echo backups"]
null_resource.cluster[4] (local-exec): Executing: ["/bin/sh" "-c" "echo web-1"]
null_resource.cluster[1] (local-exec): Executing: ["/bin/sh" "-c" "echo web"]
null_resource.cluster[0] (local-exec): Executing: ["/bin/sh" "-c" "echo logs"]
null_resource.cluster[2] (local-exec): backups
null_resource.cluster[2]: Creation complete after 0s [id=3936346761857660500]
null_resource.cluster[4] (local-exec): web-1
null_resource.cluster[3] (local-exec): logs-1
null_resource.cluster[0] (local-exec): logs
null_resource.cluster[1] (local-exec): web
null_resource.cluster[4]: Creation complete after 0s [id=3473332636300628727]
null_resource.cluster[3]: Creation complete after 0s [id=8036538301397331156]
null_resource.cluster[1]: Creation complete after 0s [id=8566902439147392295]
null_resource.cluster[0]: Creation complete after 0s [id=6115664408585418236]
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate
length(var.storageaccounts.*.containers)
Your count doesn't make sense: you are asking for a list of storageaccounts with the attribute containers. So it would be looking for
[
  {
    name       = "testbackupstorage11"
    containers = ["logs", "web", "backups"]
  },
  {
    name       = "testbackupstorage12"
    containers = ["logs-1", "web-1"]
  }
].containers
Try using a locals block to merge all the containers into one list:
locals {
  storageaccounts = [for x in var.storageaccounts : x.containers] // returns a list of lists
}
Then:
count = length(flatten(local.storageaccounts)) // all one big list
https://www.terraform.io/docs/configuration/functions/flatten.html
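For example, flattening the list of lists built from the sample variable collapses it into one big list (a terraform console sketch; output formatting varies by version):

```hcl
> flatten([["logs", "web", "backups"], ["logs-1", "web-1"]])
["logs", "web", "backups", "logs-1", "web-1"]
> length(flatten([["logs", "web", "backups"], ["logs-1", "web-1"]]))
5
```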
Sorry, I have not had a chance to test the code, but I hope this helps.
I am following this excellent guide to terraform. I am currently on the 3rd post, exploring the state, specifically at the point where terraform workspaces are demonstrated.
So, I have the following main.tf:
provider "aws" {
  region = "us-east-2"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "mark-kharitonov-terraform-up-and-running-state"

  # Enable versioning so we can see the full revision history of our
  # state files
  versioning {
    enabled = true
  }

  # Enable server-side encryption by default
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-up-and-running-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket = "mark-kharitonov-terraform-up-and-running-state"
    key    = "workspaces-example/terraform.tfstate"
    region = "us-east-2"

    # Replace this with your DynamoDB table name!
    dynamodb_table = "terraform-up-and-running-locks"
    encrypt        = true
  }
}

output "s3_bucket_arn" {
  value       = aws_s3_bucket.terraform_state.arn
  description = "The ARN of the S3 bucket"
}

output "dynamodb_table_name" {
  value       = aws_dynamodb_table.terraform_locks.name
  description = "The name of the DynamoDB table"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}
And it is all great:
C:\work\terraform [master ≡]> terraform workspace show
default
C:\work\terraform [master ≡]> terraform apply
Acquiring state lock. This may take a few moments...
aws_dynamodb_table.terraform_locks: Refreshing state... [id=terraform-up-and-running-locks]
aws_instance.example: Refreshing state... [id=i-01120238707b3ba8e]
aws_s3_bucket.terraform_state: Refreshing state... [id=mark-kharitonov-terraform-up-and-running-state]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Releasing state lock. This may take a few moments...
Outputs:
dynamodb_table_name = terraform-up-and-running-locks
s3_bucket_arn = arn:aws:s3:::mark-kharitonov-terraform-up-and-running-state
C:\work\terraform [master ≡]>
Now I am trying to follow the guide - create a new workspace and apply the code there:
C:\work\terraform [master ≡]> terraform workspace new example1
Created and switched to workspace "example1"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
C:\work\terraform [master ≡]> terraform plan
Acquiring state lock. This may take a few moments...
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_dynamodb_table.terraform_locks will be created
+ resource "aws_dynamodb_table" "terraform_locks" {
...
+ name = "terraform-up-and-running-locks"
...
}
# aws_instance.example will be created
+ resource "aws_instance" "example" {
+ ami = "ami-0c55b159cbfafe1f0"
...
}
# aws_s3_bucket.terraform_state will be created
+ resource "aws_s3_bucket" "terraform_state" {
...
+ bucket = "mark-kharitonov-terraform-up-and-running-state"
...
}
Plan: 3 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
Releasing state lock. This may take a few moments...
C:\work\terraform [master ≡]>
And here the problems start. In the guide, the terraform plan command reports that only one resource is going to be created: an EC2 instance. This implies that terraform is going to reuse the same S3 bucket for the backend and the same DynamoDB table for the lock. But in my case, terraform informs me that it wants to create all 3 resources, including the S3 bucket, which would definitely fail (I already tried).
So, what am I doing wrong? What is missing?
Creating a new workspace is effectively starting from scratch. The guide's steps are a bit confusing in this regard, but they use two separate plans to achieve the final result. The first creates the state S3 bucket and the locking DynamoDB table; the second contains just the instance being created, and uses the terraform block to tell that plan where to store its state.
In your example you are both setting your state location and creating it in the same plan. This means that when you create a new workspace, it is going to attempt to create that state location a second time, because the new workspace does not know about the other workspace's state.
In the end it's important to know that using workspaces creates unique state files per workspace by adding a workspace prefix to the remote state path (for the S3 backend, non-default workspaces are stored under the env:/ prefix by default). For example, if your state bucket is mark-kharitonov-terraform-up-and-running-state with a key of workspaces-example/terraform.tfstate, then you might see the following:
Default state: mark-kharitonov-terraform-up-and-running-state/workspaces-example/terraform.tfstate
Other state: mark-kharitonov-terraform-up-and-running-state/env:/other/workspaces-example/terraform.tfstate
EDIT:
To be clear on how to get the guide's results: you need to create two separate plans in separate folders (all the configuration in a single working directory is applied together). So create a hierarchy like:
plans/
├── state/
│   └── main.tf
└── instance/
    └── main.tf
Inside your plans/state/main.tf file put your state location content:
provider "aws" {
  region = "us-east-2"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "mark-kharitonov-terraform-up-and-running-state"

  # Enable versioning so we can see the full revision history of our
  # state files
  versioning {
    enabled = true
  }

  # Enable server-side encryption by default
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-up-and-running-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

output "s3_bucket_arn" {
  value       = aws_s3_bucket.terraform_state.arn
  description = "The ARN of the S3 bucket"
}
Then, in your plans/instance/main.tf file, you can reference the created state location with the terraform block; it should only need the following content:
terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket = "mark-kharitonov-terraform-up-and-running-state"
    key    = "workspaces-example/terraform.tfstate"
    region = "us-east-2"

    # Replace this with your DynamoDB table name!
    dynamodb_table = "terraform-up-and-running-locks"
    encrypt        = true
  }
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}
Terraform Version
Terraform v0.12.18
+ provider.aws v2.42.0
+ provider.grafana v1.5.0
+ provider.helm v0.10.4
+ provider.kubernetes v1.10.0
+ provider.local v1.4.0
+ provider.null v2.1.2
+ provider.ovh v0.5.0
+ provider.random v2.2.1
+ provider.template v2.1.2
+ provider.vault v2.7.0
Terraform Configuration Files
# mymodule/service.k8s.tf
resource "kubernetes_service" "wordpress" {
  metadata {
    labels = merge(var.deployment_config.labels, {
      app    = "myname
      engine = "wordpress"
      tier   = "frontend"
    })
  }
  ...
}
# mymodule/variables.tf
variable "deployment_config" {
  description = "The deployment configuration."
  type = object({
    labels = map(string)
  })
}

# modules.tf
module "my-module" {
  source = "./mymodule"

  deployment_config = {
    labels = {
      environment = "production"
    }
  }
}
Error
An argument named "labels" is not expected here.
Expected Behavior
The metadata.labels argument of the kubernetes_service resource (and other kubernetes resources) accepts a map(). This should simply create the resource with these labels (both maps merged):
labels = {
  app         = "myname"
  engine      = "wordpress"
  tier        = "frontend"
  environment = "production"
}
Actual Behavior
Without the merge() function, simply putting a single map works.
But when the merge() function is used, Terraform says that the argument is not expected.
Steps to Reproduce
Have a module containing a resource with an argument of type map(string) (in this case kubernetes_service with the argument metadata.labels) which merges a variable of type map(string) with another literal map(string).
Pass the variable of type map(string) to this module.
Then, terraform apply
Question
Why does using the merge() function make the argument not expected?
Thanks in advance for your help!
My guess is that in mymodule/service.k8s.tf, there is a missing end quotation for the map key app's value. When I tried to run terraform apply with the missing end quotation, I received an "Error: Invalid multi-line string". Perhaps that was the issue you received?
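For reference, assuming the missing quote is the only problem, the corrected block would look like this:

```hcl
resource "kubernetes_service" "wordpress" {
  metadata {
    labels = merge(var.deployment_config.labels, {
      app    = "myname" # closing quote added here
      engine = "wordpress"
      tier   = "frontend"
    })
  }
  # ...
}
```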
This seems to work for me.
$ tree
.
├── main.tf
└── mymodule
└── main.tf
1 directory, 2 files
$ cat main.tf
module "mymodule" {
  source = "./mymodule"

  deployment_config = {
    labels = {
      environment = "production"
    }
  }
}

output "mymodule" {
  value = module.mymodule
}
$ cat mymodule/main.tf
variable "deployment_config" {
  description = "The deployment configuration."
  type = object({
    labels = map(string)
  })
}

locals {
  labels = merge(var.deployment_config.labels, {
    app    = "myname"
    engine = "wordpress"
    tier   = "frontend"
  })
}

output "deployment_config" {
  value = local.labels
}
$ ls
main.tf mymodule/
$ terraform version
Terraform v0.12.18
$ terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
mymodule = {
  "deployment_config" = {
    "app"         = "myname"
    "engine"      = "wordpress"
    "environment" = "production"
    "tier"        = "frontend"
  }
}
I am trying to generate a bunch of files from templates. I need to replace the hardcoded 1 with count.index, but I'm not sure what format terraform will allow me to use.
resource "local_file" "foo" {
  count    = "${length(var.files)}"
  content  = "${data.template_file.tenant_repo_multi.1.rendered}"
  # TODO: Replace 1 with count index.
  filename = "${element(var.files, count.index)}"
}

data "template_file" "tenant_repo_multi" {
  count    = "${length(var.files)}"
  template = "${file("templates/${element(var.files, count.index)}")}"
}

variable "files" {
  type    = "list"
  default = ["filebeat-config_filebeat.yml",...]
}
I am running with:
Terraform v0.11.7
+ provider.gitlab v1.0.0
+ provider.local v1.1.0
+ provider.template v1.0.0
You can iterate through the tenant_repo_multi data source like so:
resource "local_file" "foo" {
  count    = "${length(var.files)}"
  content  = "${element(data.template_file.tenant_repo_multi.*.rendered, count.index)}"
  filename = "${element(var.files, count.index)}"
}
However, have you considered using the template_dir resource from the Terraform template provider? An example below:
resource "template_dir" "config" {
  source_dir      = "./unrendered"
  destination_dir = "./rendered"

  vars = {
    message = "world"
  }
}