I want to launch 5 VMs and, as soon as they launch, save the IP of each VM in a file.
That is the high-level idea: launch 5 instances and collect all of their IPs in a file on a single VM.
I think template_file might work here, but I am not sure how to implement this scenario.
I tried:
#!/bin/bash
touch myip.txt
private_ip=$(google_compute_instance.default.network_interface.0.network_ip)
echo "$private_ip" >> /tmp/ip.sh
resource "null_resource" "coderunner" {
provisioner "file" {
source = "autoo.sh"
destination = "/tmp/autoo.sh"
connection {
host = google_compute_address.static.address
type = "ssh"
user = var.user
private_key = file(var.privatekeypath)
}
}
connection {
host = google_compute_address.static.address
type = "ssh"
user = var.user
private_key = file(var.privatekeypath)
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/autoo.sh",
"sh /tmp/autoo.sh",
]
}
depends_on = ["google_compute_instance.default"]
}
but it is not working; as soon as the script runs, it throws an error:
null_resource.coderunner (remote-exec): /tmp/autoo.sh: line 3: google_compute_instance.default.network_interface.0.network_ip: command not found
There are two kinds of templates in Terraform. One is template_file, which is a data source, and the other is templatefile, which is a function. (Your error, by the way, happens because autoo.sh is an ordinary shell script: google_compute_instance.default.network_interface.0.network_ip is Terraform syntax, so the shell tries to run it as a command.)
template_file is used when you have a file that you want to transfer from your machine to the provisioned instance, substituting some parameters for that machine. For example:
data "template_file" "temp_file" {
template = file("template.yaml")
vars = {
"local_ip" = "my_local_ip"
}
}
(If you want a more detailed explanation of this example, just ask in the comments, but I don't think it fits your use case.)
This is useful because you can render the file differently for each instance, for example if you iterate over your instances with count.
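For comparison, the templatefile function form does the same rendering as a plain expression instead of a data source; a minimal sketch, assuming the same template.yaml and variable:

locals {
  # Function equivalent of the template_file data source above: renders
  # template.yaml with local_ip substituted into it.
  rendered = templatefile("template.yaml", {
    local_ip = "my_local_ip"
  })
}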
As you can see, templates do something completely different from what you want.
To do what you want, it's best to use two provisioners:
First, a file provisioner to copy a script to the instance (one that runs, for example, ip a and pipes the output through cut/grep to keep only the data you need),
and second, a remote-exec provisioner that executes that script.
Instead of depends_on, when you use a remote-exec provisioner it's better to start with a sleep command. sleep pauses for the given amount of time and lets your instance finish booting properly. You need to pick the right sleep duration for the size and speed of your instance, but I usually use 30 seconds. A sketch of this two-provisioner setup is shown below.
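A minimal sketch of that approach, assuming five instances declared as google_compute_instance.default with count = 5, the same var.user and var.privatekeypath as in your code, and a hypothetical collect_ip.sh script:

resource "null_resource" "collect_ip" {
  count = 5

  # Connect to each instance's public address (assumes the first network
  # interface has an access_config block).
  connection {
    host        = google_compute_instance.default[count.index].network_interface[0].access_config[0].nat_ip
    type        = "ssh"
    user        = var.user
    private_key = file(var.privatekeypath)
  }

  # 1st provisioner: copy the script that gathers the IP onto the instance.
  provisioner "file" {
    source      = "collect_ip.sh"
    destination = "/tmp/collect_ip.sh"
  }

  # 2nd provisioner: wait for boot to settle, then run the script.
  provisioner "remote-exec" {
    inline = [
      "sleep 30",
      "chmod +x /tmp/collect_ip.sh",
      "/tmp/collect_ip.sh",
    ]
  }
}

collect_ip.sh could, for example, run hostname -I (or ip a piped through grep/cut) and append the result to a file, or send it on to the one VM where you want all of the addresses collected.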
I hope I understood your question correctly and that this helps.
I am trying to create a single GCP Workflow using Terraform (Terraform Workflows documentation here). To create a workflow, I have defined the desired steps and order of execution using the Workflows syntax in YAML (it can also be JSON).
I have around 20 different jobs, and each of these jobs is in a different .yaml file under the same folder, workflows/. I just want to loop over the workflows/ folder and have a single .yaml file to be able to create my resource. What would be the best way to achieve this using Terraform? I read about for_each, but it is primarily used to loop over something to create multiple resources rather than a single resource.
workflows/job-1.yaml
- getCurrentTime:
    call: http.get
    args:
      url: https://us-central1-workflowsample.cloudfunctions.net/datetime
    result: currentDateTime
workflows/job-2.yaml
- readWikipedia:
    call: http.get
    args:
      url: https://en.wikipedia.org/w/api.php
      query:
        action: opensearch
        search: ${currentDateTime.body.dayOfTheWeek}
    result: wikiResult
main.tf
resource "google_workflows_workflow" "example" {
name = "workflow"
region = "us-central1"
description = "Magic"
service_account = google_service_account.test_account.id
source_contents = YAML FILE HERE
Terraform has a function fileset which allows a configuration to react to files available on disk alongside its definition. You can use this as a starting point for constructing a suitable expression for for_each:
locals {
  workflow_files = fileset("${path.module}/workflows", "*.yaml")
}
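With the two example files above, local.workflow_files would evaluate to a set of filenames relative to the workflows directory, something like:

toset([
  "job-1.yaml",
  "job-2.yaml",
])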
It looks like you'd also need to specify a separate name for each workflow, due to the design of the remote system, and so perhaps you'd decide to set the name to be the same as the filename but with the .yaml suffix removed, like this:
locals {
  workflows = tomap({
    for fn in local.workflow_files :
    substr(fn, 0, length(fn) - 5) => "${path.module}/workflows/${fn}"
  })
}
This uses a for expression to project the set of filenames into a map from workflow name (trimmed filename) to the path to the specific file. The result then would look something like this:
{
  job-1 = "./module/workflows/job-1.yaml"
  job-2 = "./module/workflows/job-2.yaml"
}
This now meets the requirements for for_each, so you can refer to it directly as the for_each expression:
resource "google_workflows_workflow" "example" {
for_each = local.workflows
name = each.key
region = "us-central1"
description = "Magic"
service_account = google_service_account.test_account.id
source_contents = file(each.value)
}
Your question didn't include any definition for how to populate the description argument, so I've left it set to hard-coded "Magic" as in your example. In order to populate that with something reasonable you'd need to have an additional data source for that, since what I wrote above is already making full use of the information we get just from scanning the content of the directory.
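One hedged option, if a simple lookup table is enough, is to keep per-workflow descriptions in a local map keyed by the same trimmed filename (the entries below are just placeholders):

locals {
  # Placeholder descriptions keyed by the trimmed filename; replace these with
  # whatever metadata you actually have.
  workflow_descriptions = {
    "job-1" = "Fetches the current date and time"
    "job-2" = "Searches Wikipedia for the current day of the week"
  }
}

Then set description = lookup(local.workflow_descriptions, each.key, "Magic") in the resource block above, so workflows without an entry keep the hard-coded default.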
resource "google_workflows_workflow" "example" {
# count for total iterations
count = 20
name = "workflow"
region = "us-central1"
description = "Magic"
service_account = google_service_account.test_account.id
# refer to file using index, index starts from 0
source_contents = file("${path.module}/workflows/job-${each.index}.yaml")
}
Goal:
Reserve a static IP permanently in the GCP console, e.g. "ip-drupal-1".
In the Terraform submodule "./module_drupal", make use of that 'ip-drupal-1'.
When 'terraform destroy' is invoked, ip-drupal-1 must stay reserved in GCP. If the static IP gets destroyed, a new one will be generated and I will have to update DNS records.
The below procedure did not achieve that goal. Is there any sample code out there?
I added a "terraform import -var-file="main.tfvars" google_compute_address.ip-drupal-1 ip-drupal-1",
so it imports that static ip each time I invoke that shellscript.
How to avoid this error : "to import to this address, you must first remove ..."
To specifically address this, maybe add a terraform state rm for the resource address right before the import.
See this for info about terraform state rm.
Depending on how you are handling your automation, that might work.
Here is what I ended up doing:
In infra-createorupdate.sh, I added 'terraform state rm' before importing.
infra-createorupdate.sh
terraform init
terraform state list
STATIC_IP_NAME="ip-base"
terraform state rm "google_compute_address.${STATIC_IP_NAME}"
terraform import -var-file="main.tfvars" google_compute_address.${STATIC_IP_NAME} ${STATIC_IP_NAME}
read -s -n 1 -p "Press any key to continue . . ."
terraform plan -var-file="main.tfvars" -out plan.out
#echo -ne '\007'
read -s -n 1 -p "Press any key to APPLY .. Press Ctrl C to abort"
echo ""
terraform apply plan.out
#echo -ne '\007'
Then, in my root main.tf, I defined the address with prevent_destroy = true:
main.tf
resource "google_compute_address" "ip-base" { # see terraform import in deploy.sh
name = "ip-base"
lifecycle {
prevent_destroy = true #DO NOT DELETE STATIC-IP
}
}
# Call vm_micro
module "vm-base" {
  source           = "./_module-vm-micro"
  vm_instance_name = "base"
  custom_static_ip = google_compute_address.ip-base.address

  # from predefined
  # CPU/RAM
  vm_size = var.org_micro

  # where
  deploy_env = var.deploy_env
  zone       = var.zone

  # disk
  boot_image = var.disk_image_coscloud
  disk_type  = var.disk_hdd
  disk_size  = var.disk_20_gb

  # login
  login_key_file = var.ssh_pubkey_file
  login_user     = var.ssh_username
}
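For reference, inside _module-vm-micro the reserved address gets attached through the instance's access_config. A minimal sketch, where the variable and resource names inside the submodule are assumptions and only the arguments relevant to the static IP are spelled out:

variable "custom_static_ip" { type = string }
variable "vm_instance_name" { type = string }
variable "vm_size"          { type = string }
variable "zone"             { type = string }
variable "boot_image"       { type = string }

resource "google_compute_instance" "vm" {
  name         = var.vm_instance_name
  machine_type = var.vm_size
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = var.boot_image
    }
  }

  network_interface {
    network = "default"

    access_config {
      # Attach the externally reserved address; destroying the VM (or the
      # module) leaves the reservation itself untouched.
      nat_ip = var.custom_static_ip
    }
  }
}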
Another alternative:
Perhaps I can also save money by not using a static IP at all. Instead, in the boot startup script, I could make a Dynamic DNS API call to update the DNS record with the instance's current IP. I have to experiment with that; hopefully there is some example code.
https://support.google.com/domains/answer/6147083?hl=en#zippy=%2Cusing-the-api-to-update-your-dynamic-dns-record
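A hedged sketch of that alternative, based on the Google Domains dynamic-DNS API linked above; DDNS_USER, DDNS_PASS, and drupal.example.com are placeholders, and the instance's current external IP is read from the GCE metadata server:

resource "google_compute_instance" "drupal" {
  name         = "drupal"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    network = "default"
    access_config {} # ephemeral external IP instead of a reserved static one
  }

  # On every boot, look up the instance's current external IP from the
  # metadata server and push it to the dynamic DNS record.
  metadata_startup_script = <<-EOT
    #!/bin/bash
    MY_IP=$(curl -s -H "Metadata-Flavor: Google" \
      "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip")
    curl -s "https://DDNS_USER:DDNS_PASS@domains.google.com/nic/update?hostname=drupal.example.com&myip=$${MY_IP}"
  EOT
}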
I am trying to read the instance public IP and the instance name from Terraform and then write them into a file, on the same line.
With the following command, I write the file below:
provisioner "local-exec" {
command = "echo \"${join("\n", aws_instance.nodeStream.*.public_ip)}\" >> ../ouput_file"
}
output_file:
34.14.219.13
64.2.201.14
59.12.31.15
What I want is to have the following output_file:
34.14.219.13 instance_name1
64.2.201.14 instance_name2
59.12.31.15 instance_name3
So I have tried the following to concatenate both lists:
provisioner "local-exec" {
command = "echo \"${concat(sort(lookup(aws_instance.node1Stream.*.tags, "Name")), sort(aws_instance.node1Stream.*.public_ip))}\" >> ../../output_file"
}
The previous throws:
Error: Invalid function argument: Invalid value for "inputMap" parameter: lookup() requires a map as the first argument.
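The error occurs because aws_instance.node1Stream.*.tags is a list of maps (one tags map per instance), not a single map, so lookup() cannot take it as its first argument. Using the instance names from your desired output, that expression evaluates to roughly:

[
  { Name = "instance_name1" },
  { Name = "instance_name2" },
  { Name = "instance_name3" },
]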
Since your goal is to produce a string from a data structure, this seems like a good use for string templates:
locals {
  hosts_file_content = <<EOT
%{ for inst in aws_instance.node1Stream ~}
${inst.public_ip} ${inst.tags["Name"]}
%{ endfor ~}
EOT
}
With that local value defined, you can include it in the command argument of the provisioner like this:
provisioner "local-exec" {
command = "echo '${local.hosts_file_content}' >> ../../output_file"
}
If just getting that data into a file is your end goal, and that wasn't just a contrived example for the sake of this question, I'd recommend using the local_file resource instead so that Terraform can manage that file like any other resource, including potentially updating it if the inputs change without the need for any special provisioner triggering:
resource "local_file" "hosts_file" {
filename = "${path.root}/../../output_file"
content = <<EOT
%{ for inst in aws_instance.node1Stream ~}
${inst.private_ip} ${inst.tags["Name"]}
%{ endfor ~}
EOT
}
With that said, the caveat on the local_file documentation page applies both to this resource-based approach and the provisioner-based approach: Terraform is designed primarily for managing remote objects that can persist from one Terraform run to the next, not for objects that live only on the system where Terraform is currently running. Although these features do allow creating and modifying local files, it'll be up to you to make sure that the previous file is consistently available at the same location relative to the Terraform configuration next time you apply a change, or else Terraform will see the file gone and be forced to recreate it.
I have written a Terraform script to create a few Azure Virtual Machines.
The number of VMs created is based upon a variable called type in my .tfvars file:
type = [ "Master-1", "Master-2", "Master-3", "Slave-1", "Slave-2", "Slave-3" ]
My variables.tf file contains the following local:
count_of_types = "${length(var.type)}"
And my resources.tf file contains the code required to actually create the relevant number of VMs from this information:
resource "azurerm_virtual_machine" "vm" {
count = "${local.count_of_types}"
name = "${replace(local.prefix_specific,"##TYPE##",var.type[count.index])}-VM"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
network_interface_ids = ["${azurerm_network_interface.main.*.id[count.index]}"]
vm_size = "Standard_B2ms"
tags = "${local.tags}"
Finally, in my output.tf file, I output the IP address of each server:
output "public_ip_address" {
value = ["${azurerm_public_ip.main.*.ip_address}"]
}
I am creating a Kubernetes cluster with 1x Master and 1x Slave VM. For this purpose, the script works fine - the first IP output is the Master and the second IP output is the Slave.
However, when I move to 8+ VMs in total, I'd like to know which IP refers to which VM.
Is there a way of amending my output to include the type local, or just the server's hostname alongside the Public IP?
E.g. 54.10.31.100 // Master-1.
Take a look at formatlist (one of Terraform's string manipulation functions), which can be used to iterate over instance attributes and format tags and other attributes of interest.
output "ip-address-hostname" {
value = "${
formatlist(
"%s:%s",
azurerm_public_ip.resource_name.*.fqdn,
azurerm_public_ip.resource_name.*.ip_address
)
}"
}
Note that this is just draft pseudo-code. You may have to tweak it and create additional data sources in your TF file to enumerate the exact attributes you need.
More reading is available at https://www.terraform.io/docs/configuration/functions/formatlist.html
Raunak Jhawar's answer pointed me in the right direction, and therefore got the green tick.
For reference, here's the exact code I used in the end:
output "public_ip_address" {
value = "${formatlist("%s: %s", azurerm_virtual_machine.vm.*.name, azurerm_public_ip.main.*.ip_address)}"
}
This resulted in output listing each VM's name alongside its public IP.
I am not able to target a single aws_volume_attachment with its corresponding aws_instance via -target.
The problem is that the aws_instance is taken from a list by using count.index, which forces terraform to refresh all aws_instance resources from that list.
In my concrete case I am trying to manage a consul cluster with terraform.
The goal is to be able to reinit a single aws_instance resource via the -target flag, so I can upgrade/change the whole cluster node by node without downtime.
I have the following tf code:
### IP suffixes
variable "subnet_cidr" { "10.10.0.0/16" }
// I want nodes with addresses 10.10.1.100, 10.10.1.101, 10.10.1.102
variable "consul_private_ips_suffix" {
default = {
"0" = "100"
"1" = "101"
"2" = "102"
}
}
###########
# EBS
#
// Get existing data EBS via Name Tag
data "aws_ebs_volume" "consul-data" {
count = "${length(keys(var.consul_private_ips_suffix))}"
filter {
name = "volume-type"
values = ["gp2"]
}
filter {
name = "tag:Name"
values = ["${var.platform_type}.${var.platform_id}.consul.data.${count.index}"]
}
}
#########
# EC2
#
resource "aws_instance" "consul" {
count = "${length(keys(var.consul_private_ips_suffix))}"
...
private_ip = "${cidrhost(aws_subnet.private-b.cidr_block, lookup(var.consul_private_ips_suffix, count.index))}"
}
resource "aws_volume_attachment" "consul-data" {
count = "${length(keys(var.consul_private_ips_suffix))}"
device_name = "/dev/sdh"
volume_id = "${element(data.aws_ebs_volume.consul-data.*.id, count.index)}"
instance_id = "${element(aws_instance.consul.*.id, count.index)}"
}
This works perfectly fine for initializing the cluster.
Now I make a change to the user_data init script of the consul nodes and want to roll it out node by node.
I run terraform plan -target=aws_volume_attachment.consul-data[0] to reinit node 0.
This is when I run into the above-mentioned problem: Terraform refreshes all aws_instance resources because of instance_id = "${element(aws_instance.consul.*.id, count.index)}".
Is there a way to "force" tf to target a single aws_volume_attachment with only its corresponding aws_instance resource?
At the time of writing this sort of usage is not possible due to the fact that, as you've seen, an expression like aws_instance.consul.*.id creates a dependency on all the instances, before the element function is applied.
The -target option is not intended for routine use and is instead provided only for exceptional circumstances such as recovering carefully from an unintended change.
For this specific situation it may work better to use the ignore_changes lifecycle setting to prevent automatic replacement of the instances when user_data changes, like this:
resource "aws_instance" "consul" {
count = "${length(keys(var.consul_private_ips_suffix))}"
...
private_ip = "${cidrhost(aws_subnet.private-b.cidr_block, lookup(var.consul_private_ips_suffix, count.index))}"
lifecycle {
ignore_changes = ["user_data"]
}
}
With this set, Terraform will detect but ignore changes to the user_data attribute. You can then get the gradual replacement behavior you want by manually tainting the resources one at a time:
$ terraform taint aws_instance.consul[0]
On the next plan, Terraform will then see that this resource instance is tainted and produce a plan to replace it. This gives you direct control over when the resources are replaced, so you can ensure that e.g. the consul leave step gets a chance to run first, or do whatever other cleanup you need.
This workflow is recommended over -target because it makes the replacement step explicit. -target can be confusing in a collaborative environment because there is no evidence of its use, and thus no clear explanation of how the current state was reached. taint, on the other hand, explicitly marks your intention in the state where other team members can see it, and then replaces the resource via the normal plan/apply steps.