Why is my Terraform not working with minikube?

I am trying to launch pods in minikube using Terraform. While running terraform apply I am getting the error "zip: not a valid zip file".
provider "kubernetes" {
config_context_cluster = "minikube"
}
resource "kubernetes_pod" "echo" {
metadata {
name = "echo-example"
labels {
App = "echo"
} }
spec {
container {
image = "hashicorp/http-echo:0.2.1"
name = "example2"
args = ["-listen=:80", "-text='Hello World'"]
port {
container_port = 80
}
}
}
}

There are a lot of similar cases; for example, this issue.
You need to move your individual .tf files into their own directories, and then you can point Terraform at a directory.
The plan command only accepts directories, and the apply command will only take an entire directory or a plan output file (use -out on plan). I think this limitation is due to the fact that Terraform requires a state file for each plan. Here is how I've set up my Terraform project; note that secrets.tfvars and terraform.tfvars are common between both Terraform plans.
$ tree
.
├── 1-base
│   ├── provider.tf
│   ├── backend.tf
│   └── core.tf
├── 2-k8s
│   ├── 1-k8s.tf
│   ├── 2-helm.tf
│   ├── apps
│   ├── provider.tf
│   ├── backend.tf
│   ├── chart-builds
│   └── charts
├── secrets.tfvars
├── terraform.tfvars
└── todo.md
From here you can run:
$ terraform init -var-file=secrets.tfvars ./1-base
$ terraform plan -var-file=secrets.tfvars ./1-base
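To apply exactly what was planned, save the plan with -out and pass the plan file to apply; something like this (the plan file name is illustrative):
$ terraform plan -var-file=secrets.tfvars -out=base.tfplan ./1-base
$ terraform apply base.tfplan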

Related

Failed to query available provider packages version 2.56.0 does not match configured version constraint

I am not sure how to resolve this error; I have tried various combinations of versions but cannot get this working.
Within my modules:
terraform {
  required_version = "~> 1.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.98.0" # or 2.62.1 / 1.6.0, depending on what resource the module is for
    }
  }
}
Within my main.tf file:
terraform {
  required_version = "~> 1.0.1"
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
    azuread = {
      source = "hashicorp/azuread"
    }
    external = {
      source = "hashicorp/external"
    }
    random = {
      source = "hashicorp/random"
    }
    sops = {
      source = "carlpett/sops"
    }
  }
}
Error upon terraform init:
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/azurerm: locked provider registry.terraform.io/hashicorp/azurerm 2.56.0 does not match
│ configured version constraint ~> 2.62.1, ~> 2.98.0; must use terraform init -upgrade to allow selection of new versions
╵
These are the provider requirements:
user:$ terraform providers
Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/random]
├── provider[registry.terraform.io/carlpett/sops] 0.6.3
├── provider[registry.terraform.io/hashicorp/azurerm]
├── provider[registry.terraform.io/hashicorp/azuread] ~> 1.6.0
├── provider[registry.terraform.io/hashicorp/external]
├── module.azurerm_storagecontainer_container1
│ └── provider[registry.terraform.io/hashicorp/azurerm] ~> 2.98.0
├── module.azurerm_servicebusqueue_bus1
│ └── provider[registry.terraform.io/hashicorp/azurerm] ~> 2.62.1
├── module.azurerm_storageaccount
│ ├── provider[registry.terraform.io/hashicorp/random]
│ └── provider[registry.terraform.io/hashicorp/azurerm] ~> 2.98.0
├── module.azurerm_key_vault
│ ├── provider[registry.terraform.io/hashicorp/azurerm] ~> 2.98.0
│ └── provider[registry.terraform.io/hashicorp/azuread]
├── module.resourcegroup
│ └── provider[registry.terraform.io/hashicorp/azurerm] ~> 2.98.0
Providers required by state:
provider[registry.terraform.io/hashicorp/azuread]
provider[registry.terraform.io/hashicorp/azurerm]
provider[registry.terraform.io/hashicorp/random]
provider[registry.terraform.io/carlpett/sops]
So the issue was that one of the module dependencies, "servicebus", was still using an older azurerm version constraint, which caused this failure. I updated it to 2.98.0 and that got me going. Earlier I assumed this would not matter and that different modules could pin different azurerm versions, but that assumption was wrong: Terraform selects a single version of each provider for the whole configuration, so in the consuming configuration make sure all module dependencies use compatible constraints for the same provider.
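A minimal sketch of the fix, assuming the servicebus module's terraform block is the one pinning the old version; align its constraint so a single azurerm version can satisfy every module:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.98.0" # was "~> 2.62.1"; all modules' constraints must overlap
    }
  }
}
After updating the constraint, run terraform init -upgrade (as the error message itself suggests) so Terraform can select a version newer than the one recorded in the lock file.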

Terraform aws_iam_access_key secret in another division (using remote_state)

I have a Terraform infrastructure that is divided into "parts" that looks something like this.
.
├── network
│   ├── locals.tf
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
├── ecs
│   ├── locals.tf
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
└── sqs
    ├── locals.tf
    ├── main.tf
    ├── output.tf
    └── variables.tf
In SQS, I'm creating a programmatic user with aws_iam_user and aws_iam_access_key.
resource "aws_iam_user" "sqs_write" {
name = "sqs-queue-name-read"
path = "/system/"
}
resource "aws_iam_access_key" "sqs_write" {
user = aws_iam_user.sqs_write.name
pgp_key = local.settings.gpg_public_key
}
Now I need to be able to use aws_iam_access_key.sqs_write.secret in my ECS division.
I tried sending the secret to an output and using it with data.terraform_remote_state in my ECS division, but Terraform says the output does not exist (most likely because it is marked as sensitive = true).
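For reference, the attempt looked roughly like this (state bucket and key are illustrative), with the output in the SQS division and the data source in the ECS division:
# sqs/outputs.tf
output "sqs_write_secret" {
  value     = aws_iam_access_key.sqs_write.secret
  sensitive = true
}

# ecs/main.tf
data "terraform_remote_state" "sqs" {
  backend = "s3"
  config = {
    bucket = "my-state-bucket"        # illustrative
    key    = "sqs/terraform.tfstate"  # illustrative
    region = "us-east-1"
  }
}

# referenced as data.terraform_remote_state.sqs.outputs.sqs_write_secret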
I tried to save the aws_iam_access_key.sqs_write.secret to an SSM parameter with:
resource "aws_ssm_parameter" "write_secret" {
name = "sqs-queue-name-write-secret-access-key"
description = "SQS write secret access key"
key_id = "aws/secretsmanager"
type = "String"
value = aws_iam_access_key.sqs_write.secret
overwrite = true
}
But I get this error:
╷
│ Error: Missing required argument
│
│ with aws_ssm_parameter.write_secret,
│ on main.tf line 109, in resource "aws_ssm_parameter" "write_secret":
│ 109: value = aws_iam_access_key.sqs_write.secret
│
│ The argument "value" is required, but no definition was found.
╵
So I can't seem to find a way to use the secret value outside of my SQS division. I could use the encrypted_secret version of it, which works fine, but I don't know how I could decrypt it directly from Terraform, so I guess it is not an option.
Any thoughts?
My version is:
Terraform v1.0.2 on linux_amd64
provider registry.terraform.io/hashicorp/aws v3.52.0
provider registry.terraform.io/hashicorp/http v2.1.0

Using output in another module or resource

I've seen a good number of posts that talk about passing a module's output into another module, but for some reason I can't get this to work.
I can get the output of the module without any issues:
$ terraform output
this_sg_id = sg-xxxxxxxxxxxxxxxxx
However, when I reference the module in a resource or in another module, it asks me for the security group ID.
$ terraform plan
var.vpc_security_group_ids
Security Group ID
Enter a value:
Here's my file structure:
.
├── dev
│   └── service
│       └── dev_instance
│           ├── main.tf
│           ├── outputs.tf
│           └── variables.tf
└── modules
    ├── ec2
    │   ├── build_ec2.tf
    │   ├── outputs.tf
    │   └── variables.tf
    └── sg
        ├── build_sg.tf
        ├── outputs.tf
        └── variables.tf
I'm not sure if this is correct, but in dev/service/dev_instance/main.tf:
module "build_sg" {
source = "../../../modules/sg/"
vpc_id = var.vpc_id
sg_name = var.sg_name
sg_description = var.sg_description
sg_tag = var.sg_tag
sg_tcp_ports = var.sg_tcp_ports
sg_tcp_cidrs = var.sg_tcp_cidrs
sg_udp_ports = var.sg_udp_ports
sg_udp_cidrs = var.sg_udp_cidrs
sg_all_ports = var.sg_all_ports
sg_all_cidrs = var.sg_all_cidrs
}
module "build_ec2" {
source = "../../../modules/ec2/"
vpc_security_group_ids = ["${module.build_sg.this_sg_id}"]
}
In dev/service/dev_instance/output.tf:
output "this_sg_id" {
description = "The security group ID"
value = "${module.build_sg.this_sg_id}"
}
My ec2 module build_ec2.tf file has the following:
resource "aws_instance" "ec2" {
vpc_security_group_ids = ["${module.build_sg.this_sg_id}"]
}
You have a var "vpc_security_group_ids" defined somewhere, I assume in one of your variables.tf files. Terraform doesn't automatically know to fill that in with the output from a module; you need to remove the var definition and just use the module output reference in your template.
Variables are used to pass in values from the command line; they are not tied to module outputs in any way. If you expect a value to come from a module you are using, then you should not also be defining that value as a variable.
I also think you need to remove the var definition from your variables.tf file and use only the module output reference, as in the sketch below.
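A minimal sketch of the corrected wiring, assuming current Terraform syntax: the root module already passes the ID in via the module "build_ec2" block above, so the ec2 module should declare its own variable rather than referencing module.build_sg, which is not visible from inside the module:
# modules/ec2/variables.tf
variable "vpc_security_group_ids" {
  description = "Security group IDs to attach to the instance"
  type        = list(string)
}

# modules/ec2/build_ec2.tf (ami, instance_type, etc. elided)
resource "aws_instance" "ec2" {
  # use the module's own variable; module.build_sg does not exist in this scope
  vpc_security_group_ids = var.vpc_security_group_ids
}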

Using terraform remote state in s3 with multiple folders

I am currently using the default workspace and my folder structure is like this:
dev
├── app
│   └── main.tf
├── mysql
│   └── main.tf
└── vpc
    └── main.tf
I have an S3 backend created and it works fine for a single folder:
terraform {
  backend "s3" {
    bucket         = "mybucket"
    key            = "global/s3/mykey/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-wellness-nonprod"
    encrypt        = true
  }
}
I am struggling with how to include this backend config in all the folders. I want to use the same backend S3 bucket in app, mysql, and vpc (with different state keys, plus the DynamoDB table for locking), but while this works in one folder, in the second folder Terraform wants to delete both the S3 bucket and the DynamoDB table.
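For reference, each folder's backend block here would differ only in its key; a minimal sketch (the per-folder keys are illustrative):
terraform {
  backend "s3" {
    bucket         = "mybucket"
    key            = "global/s3/app/terraform.tfstate" # e.g. .../mysql/... and .../vpc/... in the other folders
    region         = "us-east-1"
    dynamodb_table = "terraform-state-wellness-nonprod"
    encrypt        = true
  }
}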
I recommend you use a module structure in your Terraform code, like this:
dev
├── modules
│   ├── app
│   │   └── app.tf
│   ├── mysql
│   │   └── mysql.tf
│   └── vpc
│       └── vpc.tf
└── main.tf
main.tf:
module "app" {
  source = "./modules/app"
  ...
}

module "mysql" {
  source = "./modules/mysql"
  ...
}

module "vpc" {
  source = "./modules/vpc"
  ...
}

terraform {
  backend "s3" {
    ...
  }
}
If you want to apply/destroy each module:
terraform apply -target module.app
terraform destroy -target module.app
See this repository for an example using a module structure.

Import value from Terraform module

I have this skeleton for two Terraform modules I'm building: api-gateway and lambda. This is the file structure:
.
├── modules
│   ├── api-gateway
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── lambda
│       ├── main.tf
│       ├── outputs.tf
│       ├── policies
│       │   └── lambda-role.json
│       └── variables.tf
├── main.tf
├── provider.tf
├── sandbox-environment.tfvars
└── variables.tf
The (excerpt) content of modules/api-gateway/main.tf is:
resource "aws_api_gateway_integration" "lambda_root" {
...
uri = "${aws_lambda_function.fn_name.invoke_arn}"
}
resource "aws_api_gateway_integration" "lambda" {
...
uri = "${aws_lambda_function.fn_name.invoke_arn}"
}
module "lambda" {
source = "../lambda"
}
The (excerpt) content of modules/lambda/main.tf is:
resource "aws_lambda_function" "fn_name" {
filename = "${data.archive_file.fn_name.output_path}"
...
runtime = "java8"
}
The problem is I can't read the value ${aws_lambda_function.fn_name.invoke_arn} in modules/api-gateway/main.tf:
$ terraform init
Initializing modules...
- module.pipeline
Error: resource 'aws_api_gateway_integration.lambda_root' config: unknown resource 'aws_lambda_function.fn_name' referenced in variable aws_lambda_function.fn_name.invoke_arn
Error: resource 'aws_api_gateway_integration.lambda' config: unknown resource 'aws_lambda_function.fn_name' referenced in variable aws_lambda_function.fn_name.invoke_arn
Is there a way to "export" that value from within modules/api-gateway/main.tf?
You would need to add an output variable in modules/lambda/outputs.tf.
output "lambda_invoke_arn" {
value = "${aws_lambda_function.fn_name.invoke_arn}"
}
Then, in modules/api-gateway/main.tf, you can reference the output of the lambda module.
resource "aws_api_gateway_integration" "lambda" {
...
uri = "${module.lambda.lambda_invoke_arn}"
}
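The same module output reference fixes the lambda_root integration from the question as well; since modules/api-gateway/main.tf already instantiates the lambda module as module "lambda", no other wiring is needed:
resource "aws_api_gateway_integration" "lambda_root" {
  ...
  uri = "${module.lambda.lambda_invoke_arn}"
}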
