Get Istio Ingress Gateway Endpoint With Terraform

I deployed Istio in AWS EKS, and I would like to get the endpoint of the automatically created load balancers to use them in other resources, like DNS entries and such. Is there any way to gather them?
I tried a data "external" block to run kubectl, but it doesn't really work well.
Any ideas?

I found what I needed: the kubernetes_service data source.
data "kubernetes_service" "istio-ingressgateway" {
metadata {
name = "istio-ingressgateway"
namespace = "istio-system"
}
}
data "kubernetes_service" "public-istio-ingressgateway" {
metadata {
name = "public-istio-ingressgateway"
namespace = "istio-system"
}
}
output "istio-ingressgateway" {
value = data.kubernetes_service.istio-ingressgateway.status.0.load_balancer.0.ingress.0.hostname
}
output "public-istio-ingressgateway" {
value = data.kubernetes_service.public-istio-ingressgateway.status.0.load_balancer.0.ingress.0.hostname
}
https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/service
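The hostname can then feed other resources directly, such as the DNS entries mentioned in the question. A hedged sketch, assuming an existing Route 53 zone (the zone ID and record name below are placeholders):

# Hypothetical DNS record pointing at the gateway's load balancer hostname.
resource "aws_route53_record" "istio" {
  zone_id = "Z0000000EXAMPLE" # placeholder hosted zone ID
  name    = "istio.example.com"
  type    = "CNAME"
  ttl     = 300
  records = [data.kubernetes_service.istio-ingressgateway.status.0.load_balancer.0.ingress.0.hostname]
}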

Related

Terraform Import Resources and Looping Over Those Resources

I am new to Terraform and looking to use it to manage our Snowflake environment with the "chanzuckerberg/snowflake" provider. I am specifically looking to leverage it for managing an RBAC model for roles within Snowflake.
The scenario is that I have about 60 databases in Snowflake which would equate to a resource for each in Terraform. We will then create 3 roles (reader, writer, all privileges) for each database. We will expand our roles from there.
The first question is, can I leverage map or object variables to define all database names and their attributes and import them using a for_each within a single resource or do I need to create a resource for each database and then import them individually?
The second question is, what would be the best approach for creating the 3 roles per database? Is there a way to iterate over all the resources of type snowflake_database and create the 3 roles? I was imagining the use of modules, variables, and resources based on the research I have done.
Any help in understanding how this can be accomplished would be super helpful. I understand the basics of Terraform but this is a bit of a complex situation for a newbie like myself to visualize enough to implement it. Thanks all!
Update:
This is what my project looks like and the error I am receiving is below it.
variables.tf:
variable "databases" {
type = list(object(
{
name = string
comment = string
retention_days = number
}))
}
databases.auto.tfvars:
databases = [
  {
    name           = "TEST_DB1"
    comment        = "Testing state."
    retention_days = 90
  },
  {
    name           = "TEST_DB2"
    comment        = ""
    retention_days = 1
  }
]
main.tf:
terraform {
  required_providers {
    snowflake = {
      source  = "chanzuckerberg/snowflake"
      version = "0.25.25"
    }
  }
}

provider "snowflake" {
  username = "user"
  account  = "my_account"
  region   = "my_region"
  password = "pwd"
  role     = "some_role"
}

resource "snowflake_database" "sf_database" {
  for_each                    = { for idx, db in var.databases : idx => db }
  name                        = each.value.name
  comment                     = each.value.comment
  data_retention_time_in_days = each.value.retention_days
}
To import the resource I run:
terraform import snowflake_database.sf_databases["TEST_DB1"] db_test_db1
I am left with this error:

Error: resource address "snowflake_database.sf_databases["TEST_DB1"]" does not exist in the configuration.

Before importing this resource, please create its configuration in the root module. For example:

resource "snowflake_database" "sf_databases" {
  # (resource arguments)
}
You should be able to define the databases using for_each and refer to the actual resources with bracketed keys in the import command. Something like:
terraform import snowflake_database.resource_id_using_for_each[foreachkey]
You could then create three snowflake_role and three snowflake_database_grant definitions using for_each over the same map of databases used for the database definitions.
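A hedged sketch of that pattern for one of the three roles (the reader role); resource names follow the chanzuckerberg/snowflake provider, and keying by database name rather than list index is an assumption:

# Hypothetical reader role and usage grant per database; the writer and
# all-privileges roles would repeat the same pattern with other privileges.
resource "snowflake_role" "reader" {
  for_each = { for db in var.databases : db.name => db }
  name     = "${each.key}_READER"
}

resource "snowflake_database_grant" "reader_usage" {
  for_each      = { for db in var.databases : db.name => db }
  database_name = each.key
  privilege     = "USAGE"
  roles         = [snowflake_role.reader[each.key].name]
}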
I had this exact same problem, and in the end the solution was quite simple: you just need to wrap the resource address in single quotes.
So instead of
terraform import snowflake_database.sf_databases["TEST_DB1"] db_test_db1
do
terraform import 'snowflake_database.sf_databases["TEST_DB1"]' db_test_db1
This took way too long to figure out!

How do I apply a CRD from github to a cluster with terraform?

I want to install a CRD with Terraform; I was hoping it would be as easy as doing this:
data "http" "crd" {
url = "https://raw.githubusercontent.com/kubernetes-sigs/application/master/deploy/kube-app-manager-aio.yaml"
request_headers = {
Accept = "text/plain"
}
}
resource "kubernetes_manifest" "install-crd" {
manifest = data.http.crd.body
}
But I get this error:
can't unmarshal tftypes.String into *map[string]tftypes.Value, expected map[string]tftypes.Value
Trying to decode it with yamldecode also doesn't work, because yamldecode doesn't support multi-doc YAML files.
I could use exec, but I was already doing that while waiting for the kubernetes_manifest resource to be released. Does kubernetes_manifest only support a single resource or can it be used to create several from a raw text manifest file?
From the kubernetes_manifest docs (emphasis mine):
Represents one Kubernetes resource by supplying a manifest attribute
That sounds to me like it does not support multiple resources / a multi-doc yaml file.
However, you can manually split the incoming document and yamldecode the parts of it:
locals {
  yamls = [for data in split("---", data.http.crd.body) : yamldecode(data)]
}

resource "kubernetes_manifest" "install-crd" {
  count    = length(local.yamls)
  manifest = local.yamls[count.index]
}
Unfortunately, on my machine this then complains about
'status' attribute key is not allowed in manifest configuration
for exactly one of the 11 manifests. Since I have no clue about Kubernetes, I have no idea what that means or whether it needs fixing.
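For what it's worth, that error suggests the decoded document carries a top-level status block, which the provider rejects. A hedged tweak, reusing the locals above, is to filter that key out before passing the manifest:

# Hypothetical workaround: drop any top-level "status" key from each
# decoded document before handing it to kubernetes_manifest.
resource "kubernetes_manifest" "install-crd" {
  count    = length(local.yamls)
  manifest = { for k, v in local.yamls[count.index] : k => v if k != "status" }
}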
Alternatively you can always use a null_resource with a script that fetches the yaml document and uses bash tools or python or whatever is installed to convert and split and filter the incoming yaml.
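A minimal sketch of that fallback, assuming kubectl is installed and configured against the target cluster (kubectl apply handles the multi-document split by itself):

# Hypothetical null_resource fallback; re-runs only when the URL changes.
resource "null_resource" "install_crd" {
  triggers = {
    url = "https://raw.githubusercontent.com/kubernetes-sigs/application/master/deploy/kube-app-manager-aio.yaml"
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${self.triggers.url}"
  }
}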
I got this to work using the kubectl provider. Eventually kubernetes_manifest should work as well, but it is currently (v2.5.0) still in beta and has some bugs. This example only uses kind+name, but for full uniqueness, it should also include the API and the namespace params.
resource "kubectl_manifest" "cdr" {
# Create a map { "kind--name" => yaml_doc } from the multi-document yaml text.
# Each element is a separate kubernetes resource.
# Must use \n---\n to avoid splitting on strings and comments containing "---".
# YAML allows "---" to be the first and last line of a file, so make sure
# raw yaml begins and ends with a newline.
# The "---" can be followed by spaces, so need to remove those too.
# Skip blocks that are empty or comments-only in case yaml began with a comment before "---".
for_each = {
for pair in [
for yaml in split(
"\n---\n",
"\n${replace(data.http.crd.body, "/(?m)^---[[:blank:]]*(#.*)?$/", "---")}\n"
) :
[yamldecode(yaml), yaml]
if trimspace(replace(yaml, "/(?m)(^[[:blank:]]*(#.*)?$)+/", "")) != ""
] : "${pair.0["kind"]}--${pair.0["metadata"]["name"]}" => pair.1
}
yaml_body = each.value
}
Once Hashicorp fixes kubernetes_manifest, I would recommend using the same approach. Do not use count+element(), because if the ordering of the elements changes, Terraform will delete/recreate many resources without needing to.
resource "kubernetes_manifest" "crd" {
for_each = {
for value in [
for yaml in split(
"\n---\n",
"\n${replace(data.http.crd.body, "/(?m)^---[[:blank:]]*(#.*)?$/", "---")}\n"
) :
yamldecode(yaml)
if trimspace(replace(yaml, "/(?m)(^[[:blank:]]*(#.*)?$)+/", "")) != ""
] : "${value["kind"]}--${value["metadata"]["name"]}" => value
}
manifest = each.value
}
P.S. Please support the Terraform feature request for multi-document yamldecode. It would make things far easier than the above regex.
Terraform can split a multi-resource yaml (---) for you (docs):
# fetch a raw multi-resource yaml
data "http" "knative_serving_crds" {
  url = "https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-crds.yaml"
}

# split raw yaml into individual resources
data "kubectl_file_documents" "knative_serving_crds" {
  content = data.http.knative_serving_crds.body
}

# apply each resource from the yaml one by one
resource "kubectl_manifest" "knative_serving_crds" {
  depends_on = [kops_cluster_updater.updater]
  for_each   = data.kubectl_file_documents.knative_serving_crds.manifests
  yaml_body  = each.value
}

Terraform resource property dependent on creation of resource

Often I've found myself in the scenario where I want to create a resource with Terraform and set, for example, an environment variable on it whose value is only known later, once the resource has been created.
Let's say I want to create a google_cloud_run_service and want to set an environment variable in the container representing the URL at which the app can be reached:
resource "google_cloud_run_service" "test_app" {
name = "test-app"
location = var.region
template {
spec {
containers {
image = "gcr.io/myimage:latest"
env {
name = "CURRENT_HOST"
value = google_cloud_run_service.test_app.status[0].url
}
}
}
}
}
This, however, is not allowed, as the service is not yet created (a resource cannot reference its own attributes). Is there a way to accomplish this?
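One hedged way around the self-reference, assuming a custom domain you control (app.example.com and var.project_id below are placeholders), is to make the URL known up front via a domain mapping:

# Hypothetical sketch: with a custom domain the host is known before the
# service exists, so no self-reference is needed.
resource "google_cloud_run_service" "test_app" {
  name     = "test-app"
  location = var.region

  template {
    spec {
      containers {
        image = "gcr.io/myimage:latest"
        env {
          name  = "CURRENT_HOST"
          value = "https://app.example.com" # placeholder custom domain
        }
      }
    }
  }
}

resource "google_cloud_run_domain_mapping" "test_app" {
  location = var.region
  name     = "app.example.com" # placeholder custom domain

  metadata {
    namespace = var.project_id # assumed variable holding the GCP project ID
  }

  spec {
    route_name = google_cloud_run_service.test_app.name
  }
}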

Terraform updating one of many ECS services/tasks

Happy Friday! I'm hoping someone can help me with this issue or point out the flaws in my thinking.
$ terraform --version
Terraform v0.12.7
+ provider.aws v2.25.0
+ provider.template v2.1.2
Preface
This is my first time using Terraform. We have an existing AWS ECS/Fargate environment up and running in a 'test' environment. We recently (i.e. after setting up the test env) started to use Terraform for IaC purposes.
Current Config
The environment has a single ECS cluster, we're using FARGATE but I'm not sure that matters for this question. The cluster has several services, each service has a single task (docker image) associated with it - so they can be individually scaled. Each docker image has its own Repo.
What I'm trying to do
So with Terraform I was hoping to be able to create, update and destroy the environment. Creating/destroying seems fairly straightforward; however, I'm hitting a roadblock with updating.
As I said each task has its own repo, when a pull request is made against the repo our CI platform (CircleCI if that matters) builds the new docker image, tags it and pushes it. Then we use an API call to trigger a build of the Terraform Repo passing the name of the service/task that was updated.
Problem
The problem we're facing is that, when going through the services (described below), I can't figure out how to get Terraform to either ignore the services that are not being updated or provide the correct container_definitions in the aws_ecs_task_definition, specifically the current image tag (we don't use the latest tag). So I'm trying to figure out how I can get the latest container information (tag) from Terraform, or just tell Terraform to skip the unmodified task.
Terraform Script
Here is a stripped-down version of what I have tried; this is in a module file called ecs.tf, and var.ecs_svc_names is a list of the service names. I have removed some elements as I don't think they pertain to this issue, and having them makes this very large.
CAVEATS
I have not run the Terraform 'script' as shown below due to the issues I am asking about, so my syntax may be a bit off. Sorry if that is the case; hopefully this will show you what I'm trying to do....
ecs.tf:
/* ecs_service_names is passed in, but here is its definition:
variable "ecs_service_names" {
  type        = list(string)
  description = "This is a list/array of the images/services that are contained in the cluster"
  default = [
    "main",
    "sub-service1",
    "sub-service2"]
}
*/

locals {
  numberOfServices = length(var.ecs_svc_names)
}

resource "aws_ecs_cluster" "ecs_cluster" {
  name = "${var.env_type}-ecs-cluster"
}

// Create the service objects (1 for each service)
resource "aws_ecs_service" "ecs-service" {
  // How many services are being created
  count           = local.numberOfServices
  name            = var.ecs_svc_names[count.index]
  cluster         = aws_ecs_cluster.ecs_cluster.id
  task_definition = "${aws_ecs_task_definition.ecs-task-definition[count.index].family}:${max(aws_ecs_task_definition.ecs-task-definition[count.index].revision, data.aws_ecs_task_definition.ecs-task-def[count.index].revision)}"
  desired_count   = 1
  launch_type     = "FARGATE"
  // stuff removed
}

resource "aws_ecs_task_definition" "ecs-task-definition" {
  // How many tasks. There is a 1-1 relationship between tasks and services
  count                    = local.numberOfServices
  family                   = var.ecs_svc_names[count.index]
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  // cpu/memory stuff removed
  task_role_arn            = var.ecs_task_role_arn
  container_definitions    = data.template_file.ecs_containers_json[count.index].rendered
}
data.tf:
locals {
  numberOfServices = length(var.ecs_svc_names)
}

data "aws_ecs_task_definition" "ecs-task-def" {
  // How many services are being created, 1-1 relationship between tasks and services
  count           = local.numberOfServices
  task_definition = aws_ecs_task_definition.ecs-task-definition[count.index].family
  depends_on = [
    "aws_ecs_task_definition.ecs-task-definition",
  ]
}

data "template_file" "ecs_containers_json" {
  // How many tasks. There is a 1-1 relationship between tasks and services
  count    = local.numberOfServices
  template = file("${path.module}/container.json.template")
  vars = {
    // vars removed
    image = aws_ecs_task_definition.ecs-task-definition[count.index].family
    // This is where I hit the road-block, how do I get the current docker tag from Terraform?
    tag = var.ecs_svc_name_2_update == var.ecs_svc_names[count.index] ? var.ecs_svc_image_tag : data.aws_ecs_task_definition.ecs-task-def[count.index].
  }
}
I didn't post the JSON document; if you need it, I can provide it...
Thank you
You need to pass the updated image attribute in the container definition of the new task definition revision.
You can data-source the container definition of the current task revision used by the service and pass it to Terraform. You may follow the code below.
data "template_file" "example" {
template = "${file("${path.module}/example.json")}"
vars {
image = "${data.aws_ecs_container_definition.example.image}"
}
}
resource "aws_ecs_task_definition" "example" {
family = "${var.project_name}-${var.environment_name}-example"
container_definitions = "${data.template_file.example.rendered}"
cpu = 192
memory = 512
}
data "aws_ecs_container_definition" "example" {
task_definition = "${var.project_name}-${var.environment_name}-example"
container_name = "example"
}
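Tying that back to the per-service loop in the question, a hedged sketch; var.ecs_repo_url is a hypothetical variable for the image registry path, everything else reuses the question's names, and the single image var here merges the question's separate image/tag vars:

# Hypothetical: read the image currently deployed for each service,
# then override it only for the service being updated.
data "aws_ecs_container_definition" "current" {
  count           = local.numberOfServices
  task_definition = aws_ecs_task_definition.ecs-task-definition[count.index].family
  container_name  = var.ecs_svc_names[count.index]
}

data "template_file" "ecs_containers_json" {
  count    = local.numberOfServices
  template = file("${path.module}/container.json.template")

  vars = {
    image = var.ecs_svc_name_2_update == var.ecs_svc_names[count.index] ? "${var.ecs_repo_url}/${var.ecs_svc_names[count.index]}:${var.ecs_svc_image_tag}" : data.aws_ecs_container_definition.current[count.index].image
  }
}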

Does Terraform execute in parallel in multi-cloud deployments?

I want to create VMs in different cloud providers (e.g. GCP, AWS, Azure) from a single Terraform script.
So I wanted to know: will Terraform create the VM instances in all the public clouds in parallel?
Terraform builds a directed acyclic graph (DAG) to understand the dependencies between things. If something isn't dependent on something else, it will be executed in parallel, up to the number specified by the -parallelism flag, which defaults to 10.
If things are completely separate across multiple providers (you're just creating the same stack in n cloud providers) then it will be comfortably parallel across those stacks.
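For instance, the concurrency cap can be adjusted per run via that flag:

# Raise the cap on concurrent operations from the default of 10.
terraform apply -parallelism=20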
However, I'd recommend against applying multiple environments/cloud providers at the same time like this because of blast radius issues and in general erring towards minimising how much changes in one operation.
If you have cross provider dependencies then Terraform is great for handling this but it still relies on building that DAG so it can understand your dependencies.
For example you might want to create an instance in GCP and use DNS to resolve the IP address but use AWS' Route53 for all your DNS. For this you could use something like this:
resource "google_compute_instance" "test" {
name = "test"
machine_type = "n1-standard-1"
zone = "us-central1-a"
tags = ["foo", "bar"]
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
// Local SSD disk
scratch_disk {
}
network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}
metadata = {
foo = "bar"
}
metadata_startup_script = "echo hi > /test.txt"
service_account {
scopes = ["userinfo-email", "compute-ro", "storage-ro"]
}
}
data "aws_route53_zone" "example" {
name = "example.com."
}
resource "aws_route53_record" "www" {
zone_id = "${data.aws_route53_zone.example.zone_id}"
name = "www.${data.aws_route53_zone.example.name}"
type = "A"
ttl = "300"
records = ["${google_compute_instance.test.network_interface.0.access_config.0.nat_ip}"]
}
This would build a graph that has the aws_route53_record.www depending on both the data.aws_route53_zone.example data source and also the google_compute_instance.test resource so Terraform knows that both of these need to complete before it can start work on the Route53 record.
