I am trying to configure VPC peering between my project's network and a network in another GCP project; however, I can't, because I don't have permission to list networks in the other project.
resource "google_compute_network" "my-network" {
name = "foobar"
auto_create_subnetworks = "false"
}
resource "google_compute_network_peering" "my-network" {
name = "peering1"
network = "${google_compute_network.my-network.self_link}"
peer_network = "${data.google_compute_network.another-network.self_link}"
}
data "google_compute_network" "another-network" {
name = "another"
project = "another-project"
}
The error:
Error 403: Required 'compute.networks.get' permission for 'projects/another-project/global/networks/another', forbidden
Since Terraform doesn't have access to another-project, I would like to know if there is any other way to do this with Terraform.
Thank you in advance! :)
If you know the name of the peer network, you can skip the data source entirely and reference the network by its full resource path:
peer_network = "projects/PEER_PROJECT/global/networks/PEER_NETWORK"
You can also use the full self link, like this:
peer_network = "https://www.googleapis.com/compute/v1/projects/Peer_Project_ID/global/networks/Peer_network_name"
Replace Peer_Project_ID and Peer_network_name with the correct values.
Related
I've built an AD directory with Terraform in AWS, but SecurityHub recently pointed out that the security group it created has a bunch of ports wide open to 0.0.0.0/0. Thankfully, I have it in a VPC with internal subnets only, but this is definitely not a great practice, and I'd rather set the SG's inbound CIDRs to my local VPC network range. Is that possible to change? I don't see a way to get at the SG, other than its ID.
This is how I created it:
resource "aws_directory_service_directory" "ad" {
name = local.ad_hostname
short_name = "CORP"
password = random_password.ad_admin_password.result
edition = "Standard"
type = "MicrosoftAD"
vpc_settings {
vpc_id = local.vpc_id
subnet_ids = slice(local.pvt_subnets, 0, 2)
}
}
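For reference, the aws_directory_service_directory resource does export a security_group_id attribute, so the managed SG can at least be looked up from Terraform. A minimal sketch (changing the rules themselves would still need care, since AWS owns and manages this group):

# Look up the AWS-managed security group that the directory created
data "aws_security_group" "ad_sg" {
  id = aws_directory_service_directory.ad.security_group_id
}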
I am using a Helm chart provisioned by Terraform which creates a Network Load Balancer, but I do not know how to get the DNS name of this balancer so I can create Route 53 records for it in Terraform.
If I could get its ARN, I could look it up with a data block and read dns_name, but there is nothing like that that Helm can return for me.
Do you have any suggestions?
I would like to keep it as IaC as possible.
PS: I am passing some values to the Helm chart so it creates an NLB; the chart's native behavior is to create a Classic LB.
service.beta.kubernetes.io/aws-load-balancer-type: nlb
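For context, a minimal sketch of how that annotation might be passed through the Terraform helm provider; the chart name, repository, and value path here are assumptions based on the ingress-nginx chart, and dots inside the annotation key must be escaped:

resource "helm_release" "ingress_nginx" {
  name       = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  namespace  = "kube-system"

  set {
    # Escaped dots keep the whole annotation key as a single map key
    name  = "controller.service.annotations.service\\.beta\\.kubernetes\\.io/aws-load-balancer-type"
    value = "nlb"
  }
}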
I just found an answer; it's simple using the kubernetes_service data source:
Note: I had to specify the namespace, otherwise the service was null (not found).
data "kubernetes_service" "ingress_nginx" {
metadata {
name = "ingress-nginx-controller"
namespace = "kube-system"
}
}
output "k8s_service_ingress" {
description = "External DN name of load balancer"
value = data.kubernetes_service.ingress_nginx.status.0.load_balancer.0.ingress.0.hostname
}
It is also covered in the official docs: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/service
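To close the loop on the original question, a minimal sketch of feeding that hostname into a Route 53 record; the zone variable and record name here are hypothetical:

resource "aws_route53_record" "nlb" {
  zone_id = var.route53_zone_id  # hypothetical variable
  name    = "app.example.com"    # hypothetical record name
  type    = "CNAME"
  ttl     = 300
  # Points at the NLB hostname exposed by the service's load balancer status
  records = [data.kubernetes_service.ingress_nginx.status[0].load_balancer[0].ingress[0].hostname]
}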
I had to use kubernetes_ingress_v1 to create a Route 53 entry for the ingress hostname:
data "kubernetes_ingress_v1" "this" {
metadata {
name = "ingress-myservice"
namespace = "myservice"
}
depends_on = [
module.myservice-eks
]
}
resource "aws_route53_record" "this" {
zone_id = local.route53_zone_id
name = "whatever.myservice.com"
type = "CNAME"
ttl = "300"
records = [data.kubernetes_ingress_v1.this.status.0.load_balancer.0.ingress.0.hostname]
}
I'm trying to create a network security group with multiple security rules, and a virtual network along with five subnets, in one script.
For that, I referred to the azurerm_virtual_network and azurerm_subnet_network_security_group_association documentation.
The documentation shows code with hardcoded values, but I want to use a loop to create the subnets inside the virtual network and the security rules inside the network security group, and then associate each subnet with the network security group.
Thanks in advance for the help!
In order to "loop", you can use the for_each = var.value method and, instead of placing the values within the main.tf file, supply them from a .tfvars file to drive the number of resources created.
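A minimal sketch of that pattern, assuming a hypothetical subnets map variable and azurerm_resource_group.this / azurerm_virtual_network.this resources defined elsewhere:

variable "subnets" {
  type = map(object({
    address_prefixes = list(string)
  }))
}

resource "azurerm_subnet" "this" {
  # One subnet per map entry; the map key becomes the subnet name
  for_each             = var.subnets
  name                 = each.key
  resource_group_name  = azurerm_resource_group.this.name   # assumed to exist
  virtual_network_name = azurerm_virtual_network.this.name  # assumed to exist
  address_prefixes     = each.value.address_prefixes
}

And the matching terraform.tfvars:

subnets = {
  subnet1 = { address_prefixes = ["10.0.1.0/26"] }
  subnet2 = { address_prefixes = ["10.0.1.64/26"] }
}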
As this is quite advanced, you would be better off dissecting/reusing something that's already available. Take a look at the Azurerm subnet modules from Claranet, available on the modules page of the Terraform Registry (and there are a ton more to explore!). Here's how you would define the NSGs, VNet, and subnets in locals, at a glance:
locals {
  network_security_group_names = ["nsg1", "nsg2", "nsg3"]
  vnet_cidr                    = "10.0.1.0/24"

  subnets = [
    {
      name              = "subnet1"
      cidr              = ["10.0.1.0/26"]
      service_endpoints = ["Microsoft.Storage", "Microsoft.KeyVault", "Microsoft.ServiceBus", "Microsoft.Web"]
      nsg_name          = local.network_security_group_names[0]
      vnet_name         = module.azure-network-vnet.virtual_network_name
    },
    {
      name              = "subnet2"
      cidr              = ["10.0.1.64/26"]
      service_endpoints = ["Microsoft.Storage", "Microsoft.KeyVault", "Microsoft.ServiceBus", "Microsoft.Web"]
      nsg_name          = local.network_security_group_names[2]
      vnet_name         = module.azure-network-vnet.virtual_network_name
    }
  ]
}
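To connect this back to the association part of the question, a sketch of how such locals could drive plain azurerm resources instead of the Claranet module; azurerm_resource_group.this and a matching azurerm_subnet.this (like the one sketched in the previous answer) are assumptions:

resource "azurerm_network_security_group" "this" {
  for_each            = toset(local.network_security_group_names)
  name                = each.value
  location            = azurerm_resource_group.this.location  # assumed to exist
  resource_group_name = azurerm_resource_group.this.name
}

resource "azurerm_subnet_network_security_group_association" "this" {
  # Key each subnet by name so each association tracks its subnet stably
  for_each                  = { for s in local.subnets : s.name => s }
  subnet_id                 = azurerm_subnet.this[each.key].id
  network_security_group_id = azurerm_network_security_group.this[each.value.nsg_name].id
}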
I am creating a two-VPC deployment. Both VPCs are deployed via modules. In module VPC01 I am defining the DHCP options using:
resource "aws_vpc_dhcp_options" "dhcp_domain_name" {
domain_name = var.domain_name
tags = {
Name = var.domain_name
Creator = var.creator_name
}
}
resource "aws_vpc_dhcp_options_association" "dns_resolver" {
vpc_id = aws_vpc.infra-vpc.id
dhcp_options_id = aws_vpc_dhcp_options.dhcp_domain_name.id
}
So this defines the VPC DHCP options when the first VPC is deployed. Now, when I deploy my second VPC, how do I associate it with the same DHCP option set?
I was trying to use:
resource "aws_vpc_dhcp_options_association" "dns_resolver" {
vpc_id = aws_vpc.infra-vpc.id
dhcp_options_id = aws_vpc_dhcp_options.dhcp_domain_name.id
}
When I do this, I get this error:
Error: Reference to undeclared resource
on modules/vpc-intapp/infra-vpc.tf line 173, in resource "aws_vpc_dhcp_options_association" "dns_resolver":
173: dhcp_options_id = aws_vpc_dhcp_options.dhcp_domain_name.id
A managed resource "aws_vpc_dhcp_options" "dhcp_domain_name" has not been
declared in module.vpc-intapp.
I need to somehow get the DHCP options ID into my second module. How do I go about this?
Modules can't directly reference resources created in other modules. If you want a resource to be shared or referenced by multiple modules, you either need to create it outside the modules and pass it as an input variable to both, or define it as an output from one module and pass it as an input to the other.
Since you've already created the first set of resources, I would go with the second option. Add this to the first VPC module:
output "vpc_dhcp_options_id" {
value = aws_vpc_dhcp_options.dhcp_domain_name.id
}
Add this to the second VPC module:
variable "vpc_dhcp_options_id" {}
And change the second VPC module to use the module's input variable:
resource "aws_vpc_dhcp_options_association" "dns_resolver" {
vpc_id = aws_vpc.infra-vpc.id
dhcp_options_id = var.vpc_dhcp_options_id
}
Finally, pass the output value from the first module as an input value to the second module:
module "my_first_vpc" {
source = "..."
}
module "my_second_vpc" {
source = "..."
vpc_dhcp_options_id = module.my_first_vpc.vpc_dhcp_options_id
}
I have created a GCP Kubernetes cluster using Terraform and configured a few Kubernetes resources on it, such as namespaces and Helm releases. I would like Terraform to automatically destroy/recreate all the Kubernetes resources if the GCP cluster is destroyed/recreated, but I can't seem to figure out how to do it.
The behavior I am trying to reproduce is similar to what you get when using triggers with null_resource. Is this possible with normal resources?
resource "google_container_cluster" "primary" {
name = "marcellus-wallace"
location = "us-central1-a"
initial_node_count = 3
resource "kubernetes_namespace" "example" {
metadata {
annotations = {
name = "example-annotation"
}
labels = {
mylabel = "label-value"
}
name = "terraform-example-namespace"
#Something like this, but this only works with null_resources
triggers {
cluster_id = "${google_container_cluster.primary.id}"
}
}
}
In your specific case, you don't need to specify any explicit dependencies. They are set automatically because you reference google_container_cluster.primary.id in your second resource.
In cases where you do need to set a dependency manually, you can use the depends_on meta-argument.
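A minimal sketch of that meta-argument on the namespace from the question (with the unsupported triggers block dropped):

resource "kubernetes_namespace" "example" {
  metadata {
    name = "terraform-example-namespace"
  }

  # Explicit dependency: the cluster is created before the namespace,
  # and the namespace is destroyed before the cluster.
  depends_on = [google_container_cluster.primary]
}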