Cloud-init file inside Terraform config file not working - Linux

I'm trying to run a cloud-init file by passing it in the Terraform config file. The terraform apply command creates all the resources, but when I spin up the VM, none of the changes from the cloud-init are visible in it.
Here is the cloud-init file, which has a .tpl extension:
users:
  - name: ansible
    gecos: Ansible
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: [users, admin]
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1.......
And here is the main.tf file:
data "template_file" "users_data" {
template = file("./sshPass.tpl")
}
data "template_cloudinit_config" "config" {
gzip = true
base64_encode = true
part {
content_type = "text/cloud-config"
content = data.template_file.users_data.rendered
}
resource "azurerm_linux_virtual_machine" "poc-vm" {
name = var.vm_name
resource_group_name = azurerm_resource_group.poc_rg.name
location = azurerm_resource_group.poc_rg.location
size = var.virtual_machine_size
admin_username = var.vm_username
network_interface_ids = [azurerm_network_interface.poc_nic_1.id]
admin_ssh_key {
username = var.vm_username
public_key = tls_private_key.poc_key.public_key_openssh
}
os_disk {
caching = var.disk_caching
storage_account_type = var.storage_type
}
source_image_reference {
publisher = var.image_publisher
offer = var.image_offer
sku = var.image_sku
version = var.image_version
}
user_data = data.template_cloudinit_config.config.rendered
}

Try this:
data "template_cloudinit_config" "config" {
gzip = true
base64_encode = true
part {
content_type = "text/cloud-config"
content = "${data.template_file.users_data.rendered}"
}
In this example I change this line content = data.template_file.users_data.rendered for this one content = "${data.template_file.users_data.rendered}"
Hope this helps!

Found the errors. Changed the file extension to '.cfg'. Used 'custom_data' instead of 'user_data'.
Added '#cloud-config' as the first line of the file.
Made sure I removed any trailing spaces at the end of my SSH key.
I also suspect I was using the wrong SSH key to log in the whole time.
But anyway, those changes fixed it.
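For reference, a minimal sketch of the corrected pieces, reusing the names from the question. The renamed sshPass.cfg starts with the cloud-config header on its very first line:
#cloud-config
users:
  - name: ansible
    gecos: Ansible
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: [users, admin]
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1.......
and the VM resource then receives the rendered config via custom_data instead of user_data (only the changed argument is shown):
resource "azurerm_linux_virtual_machine" "poc-vm" {
  # ... other arguments as in the question ...
  custom_data = data.template_cloudinit_config.config.rendered
}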

Related

What to do if instance creation and cloud-config run in separate sessions in Terraform?

I was able to manually create this device instance in OpenStack, and now I am trying to see how to make it work with Terraform.
This device instance needs a hard reboot after the volume attachment, and any cloud-config needs to be done after rebooting. Here is the general sketch of my current main.tf file.
# Configure the OpenStack Provider
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

data "template_file" "user_data" {
  template = file("./userdata.yaml")
}

# create an instance
resource "openstack_compute_instance_v2" "server" {
  name         = "Device_Instance"
  image_id     = "xxx..."
  image_name   = "device_vmdk_1"
  flavor_name  = "m1.tiny"
  key_pair     = "my-keypair"
  region       = "RegionOne"
  config_drive = true

  network {
    name = "main_network"
  }
}

resource "openstack_blockstorage_volume_v2" "volume_2" {
  name     = "device_vmdk_2"
  size     = 1
  image_id = "xxx...."
}

resource "openstack_blockstorage_volume_v3" "volume_3" {
  name     = "device_vmdk_3"
  size     = 1
  image_id = "xxx..."
}

resource "openstack_compute_volume_attach_v2" "va_1" {
  instance_id = "${openstack_compute_instance_v2.server.id}"
  volume_id   = "${openstack_blockstorage_volume_v2.volume_2.id}"
}

resource "openstack_compute_volume_attach_v2" "va_2" {
  instance_id = "${openstack_compute_instance_v2.server.id}"
  volume_id   = "${openstack_blockstorage_volume_v3.volume_3.id}"
}

resource "null_resource" "reboot_instance" {
  provisioner "local-exec" {
    on_failure  = fail
    interpreter = ["/bin/bash", "-c"]
    command     = <<EOT
openstack server reboot --hard Device_Instance --insecure
echo "................"
EOT
  }
  depends_on = [openstack_compute_volume_attach_v2.va_1, openstack_compute_volume_attach_v2.va_2]
}

resource "openstack_compute_instance_v2" "server_config" {
  name       = "Device_Instance"
  user_data  = data.template_file.user_data.rendered
  depends_on = [null_resource.reboot_instance]
}
As of now, it was able to:
- have the "Device-Cloud-Instance" generated
- have the "Device-Cloud-Instance" hard-rebooted
But it fails after rebooting. As you can see, I added the section below at the end, but it does not seem to work:
resource "openstack_compute_instance_v2" "server_config" {}
Any ideas how to make it work?
Thanks,
Jack

Terraform default script path

I am creating multiple VMs in Azure using cloud-init. They are created in parallel, and when any of them fails I can see in the logs:
Error: error executing "/tmp/terraform_876543210.sh": Process exited with status 1
But I have no way to figure out which VM is failing; I need to SSH into each of them and check.
The script path seems to be defined in Terraform's provisioning code.
Is there a way to override it, also for cloud-init, to something like /tmp/terraform_vmName_876543210.sh?
I am not using a provisioner but cloud-init; any idea how I can force Terraform to override the name of that .sh file?
Below is my script:
resource "azurerm_linux_virtual_machine" "example" {
name = "example-machine"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
size = "Standard_F2"
admin_username = "adminuser"
network_interface_ids = [
azurerm_network_interface.example.id,
]
admin_ssh_key {
username = "adminuser"
public_key = file("~/.ssh/id_rsa.pub")
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
custom_data = base64encode(templatefile(
"my-cloud-init.tmpl", {
var1 = "value1"
var2 = "value2"
})
)
}
And my cloud-init script:
## template: jinja
#cloud-config
runcmd:
  - sudo /tmp/bootstrap.sh
write_files:
  - path: /tmp/bootstrap.sh
    permissions: 00700
    content: |
      #!/bin/sh -e
      echo hello
From the Terraform code you found, it shows:
DefaultUnixScriptPath is used as the path to copy the file to for
remote execution on Unix if not provided otherwise.
That setting applies to remote execution. For remote execution over SSH, you can set the source and the destination of the copied file in a provisioner "file" block.
But it sets the path on the remote VM, not on the local machine where you run Terraform. And you can override the file name like this:
provisioner "file" {
  source      = "conf/myapp.conf"
  destination = "/etc/terraform_${var.vmName}.conf"
  ...
}
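If the intent is to make the uploaded script name identify the VM, the connection block used by provisioners also accepts a script_path argument that overrides the default /tmp/terraform_XXXXXXXXXX.sh location. A minimal sketch, assuming a var.vm_name variable and SSH details similar to the resource above; note this only affects scripts uploaded by provisioners over SSH, not the cloud-init custom_data itself:
connection {
  type        = "ssh"
  host        = azurerm_public_ip.example.ip_address  # assumed public IP resource
  user        = "adminuser"
  private_key = file("~/.ssh/id_rsa")

  # Terraform substitutes %RAND% with a random number, keeping script names unique.
  script_path = "/tmp/terraform_${var.vm_name}_%RAND%.sh"
}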

Read file and save output to local_file

I'm trying to read the content of a file on an azurerm_linux_virtual_machine and save it to a local_file so that an Ansible playbook can reference it later. Currently the .tf looks like this:
resource "azurerm_linux_virtual_machine" "vm" {
name = myvm
location = myzone
resource_group_name = azurerm_resource_group.azureansibledemo.name
network_interface_ids = [azurerm_network_interface.myterraformnic.id]
size = "Standard_DS1_v2"
os_disk {
name = "storage"
caching = "ReadWrite"
storage_account_type = "Premium_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
computer_name = myvm
admin_username = "azureuser"
disable_password_authentication = true
custom_data = base64encode(file("telnet.sh"))
admin_ssh_key {
username = "azureuser"
public_key = tls_private_key.ansible_ssh_key.public_key_openssh
}
boot_diagnostics {
storage_account_uri = azurerm_storage_account.mystorageaccount.primary_blob_endpoint
}
}
output "myoutput" {
value = file("/tmp/output.yml")
}
resource "local_file" "testoutput" {
content = <<-DOC
${file("/tmp/output.yml")}
DOC
filename = "test.yml"
}
But when I run terraform plan I get the following error:
Error: Invalid function argument
on main.tf line 181, in resource "local_file" "testoutput":
181: ${file("/tmp/output.yml")}
Invalid value for "path" parameter: no file exists at /tmp/output.yml; this
function works only with files that are distributed as part of the
configuration source code, so if this file will be created by a resource in
this configuration you must instead obtain this result from an attribute of
that resource.
The output myoutput is fine and returns no errors; this only occurs when I add the local_file resource. Is there a way to get the contents of a file into a local_file?
Copying remote files to the local machine is not supported by Terraform.
The workaround is to use scp in a local-exec provisioner, as shown here. For example:
provisioner "local-exec" {
  command = "scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ${var.openstack_keypair} ubuntu@${openstack_networking_floatingip_v2.wr_manager_fip.address}:~/client.token ."
}

How can I use Terraform's file provisioner to copy from my local machine onto a VM?

I'm new to Terraform and have so far managed to get a basic VM (plus Resource Manager trimmings) up and running on Azure. The next task I have in mind is to have Terraform copy a file from my local machine into the newly created instance. Ideally I'm after a solution where the file will be copied each time the apply command is run.
I feel like I'm pretty close, but so far I just get endless "Still creating..." statements once I apply (the file is 0 KB, so after a couple of minutes it feels safe to give up).
So far, this is what I've got (based on the code in this answer): https://stackoverflow.com/a/37866044/4941009
Network
resource "azurerm_public_ip" "pub-ip" {
name = "PublicIp"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
public_ip_address_allocation = "Dynamic"
domain_name_label = "${var.hostname}"
}
VM
resource "azurerm_virtual_machine" "vm" {
name = "${var.hostname}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
vm_size = "${var.vm_size}"
network_interface_ids = ["${azurerm_network_interface.nic.id}"]
storage_image_reference {
publisher = "${var.image_publisher}"
offer = "${var.image_offer}"
sku = "${var.image_sku}"
version = "${var.image_version}"
}
storage_os_disk {
name = "${var.hostname}osdisk1"
vhd_uri = "${azurerm_storage_account.stor.primary_blob_endpoint}${azurerm_storage_container.cont.name}/${var.hostname}osdisk.vhd"
os_type = "${var.os_type}"
caching = "ReadWrite"
create_option = "FromImage"
}
os_profile {
computer_name = "${var.hostname}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}
os_profile_windows_config {
provision_vm_agent = true
}
boot_diagnostics {
enabled = true
storage_uri = "${azurerm_storage_account.stor.primary_blob_endpoint}"
}
tags {
environment = "${var.environment}"
}
}
File Provisioner
resource "null_resource" "copy-test-file" {
connection {
type = "ssh"
host = "${azurerm_virtual_machine.vm.ip_address}"
user = "${var.admin_username}"
password = "${var.admin_password}"
}
provisioner "file" {
source = "test.txt"
destination = "/tmp/test.txt"
}
}
As an aside, if I pass incorrect login details to the provisioner (i.e. rerun this after the VM has already been created and supply a different password to the provisioner), the behaviour is the same. Can anyone suggest where I'm going wrong?
I eventually got this working. Honestly, I kind of forgot about this question so I can't remember what my exact issue was, but the example below seems to work on my instance:
resource "null_resource" remoteExecProvisionerWFolder {
provisioner "file" {
source = "test.txt"
destination = "/tmp/test.txt"
}
connection {
host = "${azurerm_virtual_machine.vm.ip_address}"
type = "ssh"
user = "${var.admin_username}"
password = "${var.admin_password}"
agent = "false"
}
}
So it looks like the only difference here is the addition of agent = "false". This would make some sense, as there's only one SSH authentication agent for Windows and it's probable I hadn't explicitly specified to use that agent before. However, it could well be that I ultimately changed something elsewhere in the configuration. Sorry, future people, for not being much help on this one.
FYI, for Windows instances you can connect via type winrm:
resource "null_resource" "provision_web" {
connection {
host = "${azurerm_virtual_machine.vm.ip_address}"
type = "winrm"
user = "alex"
password = "alexiscool1!"
}
provisioner "file" {
source = "path/to/folder"
destination = "C:/path/to/destination"
}
}

Using multiple user_data files in Terraform

I am trying to have a common user_data file for common tasks such as folder creation and certain package installs, and a separate user_data file for application-specific configuration.
I am trying the below:
user_data = "${data.template_file.userdata_common.rendered}", "${data.template_file.userdata_master.rendered}"
With these configs:
Common User Data Template
data "template_file" "userdata_common" {
template = "${file("${path.module}/userdata_common.sh")}"
vars {
"ALBTarget" = "${var.ALBTarget}"
"s3bucket" = "${var.s3bucket}"
"centrifydomain" = "${lookup(var.centrifydomain, format("%s-%s", lower(var.env),var.region))}"
"centrifyadgroup" = "${lookup(var.centrifyadgroup, format("%s-%s", lower(var.env),var.region))}"
}
}
Application Specific Config
data "template_file" "userdata_master" {
template = "${file("${path.module}/userdata_master.sh")}"
vars {
"ALBTarget" = "${var.ALBTarget}"
"s3bucket" = "${var.s3bucket}"
"centrifydomain" = "${lookup(var.centrifydomain, format("%s-%s", lower(var.env),var.region))}"
"centrifyadgroup" = "${lookup(var.centrifyadgroup, format("%s-%s", lower(var.env),var.region))}"
}
}
I get the below error when I run plan:
Failed to load root config module: Error parsing /terraform/main.tf: key ${data.template_file.userdata_common.rendered}"' expected start of object ('{') or assignment ('=')
Is this possible using Terraform (0.9.3)?
If not, what's the best way to do this with Terraform?
Did you try template_cloudinit_config?
Add the code below.
data "template_cloudinit_config" "master" {
gzip = true
base64_encode = true
# get common user_data
part {
filename = "common.cfg"
content_type = "text/part-handler"
content = "${data.template_file.userdata_common.rendered}"
}
# get master user_data
part {
filename = "master.cfg"
content_type = "text/part-handler"
content = "${data.template_file.userdata_master.rendered}"
}
}
# sample code to use it.
resource "aws_instance" "web" {
ami = "ami-d05e75b8"
instance_type = "t2.micro"
user_data = "${data.template_cloudinit_config.master.rendered}"
}
Let me know if it works.
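One caveat: since userdata_common.sh and userdata_master.sh appear to be shell scripts, cloud-init generally expects such parts to use the text/x-shellscript content type (text/part-handler is meant for custom Python part handlers), so a variant along these lines may be needed (the .sh filename is illustrative):
part {
  filename     = "common.sh"
  content_type = "text/x-shellscript"
  content      = "${data.template_file.userdata_common.rendered}"
}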
You can use "provisioner" to modify the infrastructure you are creating using the Terraform, Here is the example from them https://www.terraform.io/intro/getting-started/provision.html
