I want to build a 3-tier VPC using only AZ1a and AZ1b. I have a range of IP addresses I want to use for each tier, but they always need to fall in AZ1a or AZ1b.
I have 3 app tiers (web, app, db):
Using only two AZs.
availability_zones = ["us-east-1a", "us-east-1b"]
I have put together a list of the subnets I want to use. So I want web to grab the first two and put one in each of the AZs, then app to grab the next two and put one in each of the AZs, and so on.
private_subnets_cidr_intapp = ["10.211.130.0/28", "10.211.130.16/28", "10.211.130.32/28", "10.211.130.48/28", "10.211.130.64/28", "10.211.130.80/28"]
How do you go about this? An example would help. Should I be using a map?
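A minimal sketch of one way to lay that out, assuming Terraform 0.12+ and a VPC resource named aws_vpc.main (that name is my assumption): give each tier its own aws_subnet resource, slice the flat CIDR list by offset, and alternate the AZ with count.index. A flat list works fine here, although a map keyed by tier (web/app/db) would make the intent more explicit.

# Web gets CIDRs 0-1, app gets 2-3, db gets 4-5; within each tier,
# subnet 0 lands in us-east-1a and subnet 1 in us-east-1b.
resource "aws_subnet" "web" {
  count             = 2
  vpc_id            = aws_vpc.main.id   # assumed VPC resource name
  cidr_block        = var.private_subnets_cidr_intapp[count.index]
  availability_zone = var.availability_zones[count.index]
}

resource "aws_subnet" "app" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnets_cidr_intapp[count.index + 2]
  availability_zone = var.availability_zones[count.index]
}

resource "aws_subnet" "db" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnets_cidr_intapp[count.index + 4]
  availability_zone = var.availability_zones[count.index]
}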
Related
I'm having an issue using Terraform workspaces across a region pair: one workspace for South Central US and one for North Central US. Everything works great until it comes to zones. Take a public IP, for example: in South Central I'm setting zones, but North Central won't accept the zone configuration as empty, 0, 1, 2, or 3, because it doesn't support zones yet. I'm hoping to set zones in SCU and eventually do the same in NCU when they become available.
How do I use the same code in both regions? I know I can use workspace vars for values, but in this case it's an entire line of code. It seems like there should be an easy answer I'm just not thinking of.
I asked this question differently and got the answer I was looking for. Instead of trying to put a value in the zones config line, just put null. Terraform treats a null argument as if the line were omitted and sends the request without it. Simple, and it worked.
Here's an example from my variables.tfvars:
firewalla-AvailabilityZone = {
  hub-ncu = null
  hub-scu = [1]
}

firewallb-AvailabilityZone = {
  hub-ncu = null
  hub-scu = [3]
}
Then in the resource:
zones = var.firewalla-AvailabilityZone[terraform.workspace]
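For context, a minimal sketch of how that lands on a resource, assuming an azurerm_public_ip for the firewall; the resource name, SKU, and the var.location / var.resource_group_name variables are placeholders of mine:

resource "azurerm_public_ip" "firewall_a" {
  name                = "pip-firewall-a"
  location            = var.location
  resource_group_name = var.resource_group_name
  allocation_method   = "Static"
  sku                 = "Standard"

  # null in the hub-ncu workspace means Terraform treats zones as unset
  zones = var.firewalla-AvailabilityZone[terraform.workspace]
}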
I'm trying to create separate modules (git repos) for several Azure resources using Terraform.
For example, I want module-1 to create an AKS cluster and its default node pool, and a separate module-2 to create user node pools. Is there any way I can import values from module-1, like the AKS cluster ID, network ID, etc., just by giving the cluster name or some other identifier? If I use 'source' I have to give values every time, and the default values are not the same for all clusters. I don't want to hardcode the IDs in .tf files. Of course, I will use the same state file.
So basically: in Terraform, is it possible to get values from another Azure resource that was also created with Terraform?
thanks,
Santosh
Using the terraform_remote_state data source
One way you can achieve this is by leveraging the terraform_remote_state data source.
In module-1's main .tf, output any attributes that other modules or repos will use. Then pull the information you need into module-2 via a terraform_remote_state data source, pointing it at the location of the state file used by module-1.
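A minimal sketch of that, assuming module-1's state is stored in an azurerm backend and that it exposes an aks-cluster-id output; the storage account, container, and key names are placeholders:

data "terraform_remote_state" "module1" {
  backend = "azurerm"   # match whatever backend module-1 actually uses

  config = {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstate"
    container_name       = "tfstate"
    key                  = "module-1.tfstate"
  }
}

locals {
  # every output declared in module-1 is available under .outputs
  aks_cluster_id = data.terraform_remote_state.module1.outputs.aks-cluster-id
}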
Using a single main.tf
The way I would approach this scenario is by defining outputs from module-1 and passing those outputs in as parameters to module-2.
For example, you could have something like the following.
In your main .tf file:
module "module-1" {
source = "/path/to/module-1"
... < some parameters to module-1 ...>
}
module "module-2" {
source = "/path/to/module-2"
... < some parameters to module-2 ...>
aks-cluster-id = module.module-1.aks-cluster-id
}
Add the following output to module-1
output "aks-cluster-id" {
# Replace this with the proper resource and attribute based on how your cluster is created
value = azure.aks_cluster.aks-cluster-id
description = "AKS Cluster ID"
}
Terraform's documentation is actually pretty good and provides some useful examples. https://www.terraform.io/language/values/outputs
I am new to Terraform and have a problem while creating EC2 instances. I am passing private IP values from a variable file (like an IP pool) and creating two EC2 instances. When I run terraform apply the first time it creates the two instances, but when I run terraform apply a second time it says the IPs are already in use; it won't move on to the 3rd and 4th IPs. I want the second run to take the 3rd and 4th IPs. Below are my definitions. Could you please suggest what to do?
var.tf
variable "private_ips" {
default = {
"0" = "x.x.x.x"
"1" = "x.x.x.x"
"2" = "x.x.x.x"
"3" = "x.x.x.x"
}
}
main.tf
private_ip = "${lookup(var.private_ips,count.index)}"
Terraform is a stateful tool: whenever it creates some piece of infrastructure, it keeps track of it (in what is called Terraform state). The whole idea is that the first terraform apply creates some infrastructure, and any subsequent run just updates whatever was created before (with whatever changes were made in the .tf files).
You might want to read up on workspaces, whose idea is to use the same configuration (.tf files) against multiple independent copies of the target infrastructure, typically for dev/test/prod kinds of setups. It might be what you are after.
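If workspaces do fit your case, a minimal sketch of the idea, keyed on terraform.workspace; the workspace names, IPs, and the var.ami_id / var.subnet_id variables are all hypothetical:

variable "private_ips_per_workspace" {
  # one independent pair of IPs per workspace
  default = {
    first  = ["10.0.1.10", "10.0.1.11"]
    second = ["10.0.1.12", "10.0.1.13"]
  }
}

resource "aws_instance" "app" {
  count         = 2
  ami           = var.ami_id      # assumed to be defined elsewhere
  instance_type = "t3.micro"
  subnet_id     = var.subnet_id   # assumed to be defined elsewhere
  private_ip    = var.private_ips_per_workspace[terraform.workspace][count.index]
}

Running terraform workspace new second and then terraform apply would create a second, independent pair of instances on the 3rd and 4th IPs without touching the first pair.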
Following this v0.11 example from the official documentation, I had to make minor changes to it to make it work with v0.12 (provider.vsphere is v1.11.0).
resource "vsphere_vmfs_datastore" "datastore" {
name = "test"
host_system_id = "${data.vsphere_host.esxi_host.id}"
disks = data.vsphere_vmfs_disks.available.disks
}
This, however, creates a single datastore comprising all discovered volumes. What I want is one new datastore per discovered volume.
I tried using count = 2 in the above resource; with 2 volumes that attempts to create 2 datastores (the good), but each one still comprises both volumes (the bad).
vsphere_vmfs_datastore should count the number of volumes returned by vsphere_vmfs_disks (so that I don't have to set it), loop through the list, and create one datastore on each. That makes me think this resource section should be inside a loop, where each datastore gets a unique name and uses data.vsphere_vmfs_disks.available.disks[N], but I don't know how to do that in Terraform 0.12 (there are relatively few examples and still some bugs).
Would the following work for you? It still uses the count method but passes in the count index to select a disk inside of the data.vsphere_vmfs_disks.available.disks data source.
resource "vsphere_vmfs_datastore" "datastore" {
count = 2
name = "test${count.index}"
host_system_id = data.vsphere_host.esxi_host.id
disks = [
data.vsphere_vmfs_disks.available.disks[count.index]
]
}
Unfortunately I don't have an ESXi host I can test against, but the logic should still apply.
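And since you mentioned not wanting to set the count by hand, the same idea with the count derived from the data source (equally untested):

resource "vsphere_vmfs_datastore" "datastore" {
  # one datastore per discovered disk
  count          = length(data.vsphere_vmfs_disks.available.disks)
  name           = "test${count.index}"
  host_system_id = data.vsphere_host.esxi_host.id

  disks = [
    data.vsphere_vmfs_disks.available.disks[count.index]
  ]
}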
I would like to solve the following issue:
an agent-based model with a population of 500 agents
each agent is assigned an ID number using a variable called v_agentID, via the statement v_agentID++; after it is created
The agent should then be processed further based on a condition monitoring its individual waiting time
How can I assign individual attributes like waiting times (as the result of the calculation waitingTime = waitingTimeEnd - waitingTimeStart) to each individual agent?
Thanks a lot for your help.
Bastian
Many ways:
1) create a cyclical event on the individual agent that calculates waitingTime with the formula you provided
2) create a dynamic variable for each agent and make it equal to waitingTimeEnd-waitingTimeStart
3) create the variable whenever you want and change it in all the agents:
for (Agent a : agents) {
  a.waitingTime = a.waitingTimeEnd - a.waitingTimeStart;
}
4) Find the agent with the id you want and assign the variable to it
Agent theAgent = findFirst(agents, a -> a.id == theIdYouWant);
theAgent.waitingTime = theAgent.waitingTimeEnd - theAgent.waitingTimeStart;
5) If you know the index of the agent just do
agents.get(index).waitingTime = agents.get(index).waitingTimeEnd - agents.get(index).waitingTimeStart;