Read file and save output to local_file - Azure

I'm trying to read the content of a file on an azurerm_linux_virtual_machine and save it to a local_file so that an Ansible playbook can reference it later. Currently the .tf looks like this:
resource "azurerm_linux_virtual_machine" "vm" {
name = myvm
location = myzone
resource_group_name = azurerm_resource_group.azureansibledemo.name
network_interface_ids = [azurerm_network_interface.myterraformnic.id]
size = "Standard_DS1_v2"
os_disk {
name = "storage"
caching = "ReadWrite"
storage_account_type = "Premium_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
computer_name = myvm
admin_username = "azureuser"
disable_password_authentication = true
custom_data = base64encode(file("telnet.sh"))
admin_ssh_key {
username = "azureuser"
public_key = tls_private_key.ansible_ssh_key.public_key_openssh
}
boot_diagnostics {
storage_account_uri = azurerm_storage_account.mystorageaccount.primary_blob_endpoint
}
}
output "myoutput" {
value = file("/tmp/output.yml")
}
resource "local_file" "testoutput" {
content = <<-DOC
${file("/tmp/output.yml")}
DOC
filename = "test.yml"
}
But when I run terraform plan I get the following error:
Error: Invalid function argument
on main.tf line 181, in resource "local_file" "testoutput":
181: ${file("/tmp/output.yml")}
Invalid value for "path" parameter: no file exists at /tmp/output.yml; this
function works only with files that are distributed as part of the
configuration source code, so if this file will be created by a resource in
this configuration you must instead obtain this result from an attribute of
that resource.
The output myoutput is fine and returns no errors; this only occurs when I add in the local_file resource. Is there a way to get the contents of a remote file into a local_file?

Copying remote files to local is not supported by Terraform.
The workaround is to use scp in a local-exec provisioner, as shown here. For example:
provisioner "local-exec" {
  command = "scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i ${var.openstack_keypair} ubuntu@${openstack_networking_floatingip_v2.wr_manager_fip.address}:~/client.token ."
}
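Adapted to the Azure resources in the question, a minimal sketch (hedged: the public IP reference azurerm_public_ip.example and the local key file ansible_ssh_key.pem are assumptions, adjust them to your configuration):
resource "null_resource" "fetch_output" {
  # Run only after the VM exists, so /tmp/output.yml has been created.
  depends_on = [azurerm_linux_virtual_machine.vm]

  provisioner "local-exec" {
    # Copy the remote file next to the configuration, where Ansible can read it.
    command = "scp -o StrictHostKeyChecking=no -i ansible_ssh_key.pem azureuser@${azurerm_public_ip.example.ip_address}:/tmp/output.yml ./test.yml"
  }
}
The Ansible playbook can then reference ./test.yml directly, avoiding the file() call that fails at plan time.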

Related

Is version mandatory while creating an Azure VM using terraform?

So I have been working with Terraform for the last 3 weeks and have been trying to use it to create self-hosted GitHub Actions runners in our Azure account.
We have a shared Windows VM image in an Azure Compute Gallery that I'm planning to use as the base image for the GA runner. I have noticed that these shared Windows VM images do not generally have any versions attached to them; they just have a publisher, offer, and SKU attached.
I also verified by creating a new image from a VM, to check whether somebody had missed attaching the version, but no, shared images really do not have a version attached.
They do have versions, but the version is not attached to the image identifier the way it is for Microsoft platform images.
Now I found that in Terraform, runners can be created using both the azurerm_windows_virtual_machine and azurerm_virtual_machine resources.
I used both of them to test the runner creation; below is the Terraform code used:
data "azurerm_shared_image" "win19_gold_image" {
provider = azurerm.gi
name = "Windows-2019_base"
gallery_name = data.azurerm_shared_image_gallery.cap_win_gold_image_gallery.name
resource_group_name = "gi-rg"
}
resource "azurerm_virtual_machine" "win_runners_gold_image_based" {
provider = azurerm.og
name = "ga-win-gold-1"
location = "East US"
count = "1" # if I need to increase the number of VMs.
resource_group_name = data.azurerm_resource_group.dts_rg.name
network_interface_ids = [azurerm_network_interface.azure_win_runner_gold_nic[count.index].id,]
vm_size = "Standard_D4ads_v5"
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
storage_image_reference {
publisher = data.azurerm_shared_image.win19_gold_image.identifier[0].publisher
offer = data.azurerm_shared_image.win19_gold_image.identifier[0].offer
sku = data.azurerm_shared_image.win19_gold_image.identifier[0].sku
# Here I get the error: Error: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter imageReference.version is invalid." Target="imageReference.version"
}
storage_os_disk {
name = "ga-win-gold-os-disk-1"
caching = "None"
create_option = "FromImage"
managed_disk_type = "StandardSSD_LRS"
}
os_profile {
computer_name = "ga-win-gold-1"
admin_username = "svc"
admin_password = var.WINDOWS_ADMIN_PASS
}
os_profile_windows_config {
enable_automatic_upgrades = true
provision_vm_agent = true
}
storage_data_disk {
name = "ga-win-gold-data-disk-1"
caching = "None"
create_option = "Empty"
disk_size_gb = var.disk_size_gb
lun = 0
managed_disk_type = "StandardSSD_LRS"
}
}
OR
data "azurerm_shared_image" "win19_gold_image" {
provider = azurerm.gi
name = "Windows-2019_base"
gallery_name = data.azurerm_shared_image_gallery.cap_win_gold_image_gallery.name
resource_group_name = "gi-rg"
}
resource "azurerm_windows_virtual_machine" "azure_win_runner" {
provider = azurerm.og
name = "vm-github-actions-win-${count.index}"
resource_group_name = data.azurerm_resource_group.dts_rg.name
location = "East US"
size = var.windows-vm-size
count = "${var.number_of_win_az_instances}"
network_interface_ids = [
azurerm_network_interface.azure_win_runner_nic[count.index].id,
]
computer_name = "vm-ga-win-${count.index}"
admin_username = var.windows-admin-username
admin_password = var.WINDOWS_ADMIN_PASS
os_disk {
name = "vm-github-actions-win-${count.index}-os-disk"
caching = "None"
storage_account_type = "StandardSSD_LRS"
}
source_image_reference {
publisher = data.azurerm_shared_image.win19_gold_image.identifier[0].publisher
offer = data.azurerm_shared_image.win19_gold_image.identifier[0].offer
sku = data.azurerm_shared_image.win19_gold_image.identifier[0].sku
version = data.azurerm_shared_image.win19_gold_image.identifier[0].version # says this object does not have a version attached to it.
# or version = "latest" or any other correct version string will throw error at time of apply that such a version does not exist.
}
enable_automatic_updates = true
provision_vm_agent = true
}
If I'm using azurerm_virtual_machine and I omit the version in storage_image_reference, I receive the error:
Error: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter imageReference.version is invalid." Target="imageReference.version"
And if I add the version, then I receive the error:
Error: Unsupported attribute.
This object does not have an attribute named "version".
When using azurerm_windows_virtual_machine, if I remove the version argument Terraform complains that version is required, and when provided a string such as 1.0.0 or latest, while applying (terraform apply) it complains that such a version does not exist.
And if I pull the version from data.azurerm_shared_image.cap_win19_gold_image it complains that this object does not have a version.
I am confused as to how to use shared images for VM creation with Terraform if version is mandatory yet no version is available for Azure shared images. Please advise on what I am missing.
Any help would be appreciated.
Thanks,
Sekhar
It seems that to get a version of the image you need to use another resource [1] and another data source [2]:
data "azurerm_image" "win19_gold_image" {
name = "Windows-2019_base"
resource_group_name = "gi-rg"
}
resource "azurerm_shared_image_version" "win19_gold_image" {
name = "0.0.1"
gallery_name = data.azurerm_shared_image.win19_gold_image.gallery_name
image_name = data.azurerm_shared_image.win19_gold_image.name
resource_group_name = data.azurerm_shared_image.win19_gold_image.resource_group_name
location = data.azurerm_shared_image.win19_gold_image.location
managed_image_id = data.azurerm_image.win19_gold_image.id
}
And then in the source_image_reference block in the azurerm_windows_virtual_machine resource:
source_image_reference {
  publisher = data.azurerm_shared_image.win19_gold_image.identifier[0].publisher
  offer     = data.azurerm_shared_image.win19_gold_image.identifier[0].offer
  sku       = data.azurerm_shared_image.win19_gold_image.identifier[0].sku
  version   = azurerm_shared_image_version.win19_gold_image.name
}
It seems the name argument is actually the version of the image [3]:
name - (Required) The version number for this Image Version, such as 1.0.0. Changing this forces a new resource to be created.
[1] https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/shared_image_version
[2] https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/image
[3] https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/shared_image_version#name
Hi all who come across this question,
I found the solution to my issue. All I had to do was define an azurerm_shared_image_version data source and then use source_image_id in azurerm_windows_virtual_machine in place of the source_image_reference {} block.
Below is what I did:
data "azurerm_shared_image_gallery" "win_gold_image_gallery" {
provider = azurerm.gi
name = "golden_image_gallery"
resource_group_name = "gi-rg"
}
data "azurerm_shared_image" "win19_gold_image" {
provider = azurerm.gi
name = "Windows-2019_base"
gallery_name = data.azurerm_shared_image_gallery.win_gold_image_gallery.name
resource_group_name = data.azurerm_shared_image_gallery.win_gold_image_gallery.resource_group_name
}
data "azurerm_shared_image_version" "win19_gold_image_version" {
provider = azurerm.gi
name = "latest" # "recent" is also a tag to use the most recent image version
image_name = data.azurerm_shared_image.win19_gold_image.name
gallery_name = data.azurerm_shared_image.win19_gold_image.gallery_name
resource_group_name = data.azurerm_shared_image.win19_gold_image.resource_group_name
}
resource "azurerm_windows_virtual_machine" "azure_win_gi_runner" {
provider = azurerm.dep
name = "vm-github-actions-win-gi-${count.index}"
resource_group_name = data.azurerm_resource_group.dts_rg.name
location = "East US"
size = var.windows-vm-size
count = "${var.number_of_win_gi_az_instances}"
network_interface_ids = [
azurerm_network_interface.azure_win_gi_runner_nic[count.index].id,
]
computer_name = "ga-win-gi-${count.index}"
admin_username = var.windows-admin-username
admin_password = var.WINDOWS_ADMIN_PASS
os_disk {
name = "vm-github-actions-win-gi-${count.index}-os-disk"
caching = "None"
storage_account_type = "StandardSSD_LRS"
}
source_image_id = data.azurerm_shared_image_version.win19_gold_image_version.id
# This is the thing I was missing.
enable_automatic_updates = true
provision_vm_agent = true
tags = {
whichVM = var.gh_windows_runner
environment = var.environment
}
}

Terraform default script path

I am creating multiple VMs in Azure using cloud-init. They are created in parallel, and when any of them fails I can see in the logs:
Error: error executing "/tmp/terraform_876543210.sh": Process exited with status 1
But I have no way to figure out which VM is failing; I need to ssh into each of them and check.
The script path seems to be defined for Terraform provisioning.
Is there a way to override it also for cloud-init, to something like /tmp/terraform_vmName_876543210.sh?
I am not using a provisioner but cloud-init; any idea how I can force Terraform to override the terraform .sh file name?
Below is my script:
resource "azurerm_linux_virtual_machine" "example" {
name = "example-machine"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
size = "Standard_F2"
admin_username = "adminuser"
network_interface_ids = [
azurerm_network_interface.example.id,
]
admin_ssh_key {
username = "adminuser"
public_key = file("~/.ssh/id_rsa.pub")
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
custom_data = base64encode(templatefile(
"my-cloud-init.tmpl", {
var1 = "value1"
var2 = "value2"
})
)
}
And my cloud-init script:
## template: jinja
#cloud-config
runcmd:
  - sudo /tmp/bootstrap.sh
write_files:
  - path: /tmp/bootstrap.sh
    permissions: 00700
    content: |
      #!/bin/sh -e
      echo hello
From the Terraform code you found, it shows:
DefaultUnixScriptPath is used as the path to copy the file to for remote execution on Unix if not provided otherwise.
It's the configuration for remote execution. For remote execution over SSH, you can set the source and the destination for the copied file in the provisioner "file".
But this sets the path in the remote VM, not on the local machine where you execute Terraform. And you can overwrite the file name like this:
provisioner "file" {
source = "conf/myapp.conf"
destination = "/etc/terraform_$(var.vmName).conf"
...
}
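As a further note (an addition based on the Terraform connection block documentation, not part of the original answer): provisioner connections accept a script_path argument that overrides the default /tmp/terraform_%RAND%.sh upload location. This only applies to provisioner scripts, not to cloud-init, which runs from custom_data. A minimal sketch, reusing the hypothetical var.vmName:
connection {
  type        = "ssh"
  host        = azurerm_linux_virtual_machine.example.public_ip_address
  user        = "adminuser"
  private_key = file("~/.ssh/id_rsa")

  # script_path overrides where provisioner scripts are uploaded;
  # Terraform replaces %RAND% with a random number to avoid collisions.
  script_path = "/tmp/terraform_${var.vmName}_%RAND%.sh"
}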

Terraform Azure run bash script on VM

I'm trying to run a bash script on an Azure VM after deploying it with Terraform. I've tried different approaches, but none of them have worked. With custom_data, I assumed that the file would be uploaded and executed; however, I'm not even seeing the file inside the VM.
I've also looked at azurerm_virtual_machine_extension, but it does not give me the option to upload the file, only to execute commands or download from a remote location (I can't use fileUris due to requirements):
resource "azurerm_virtual_machine_extension" "test" {
name = "hostname"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
virtual_machine_name = "${azurerm_virtual_machine.test.name}"
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = <<SETTINGS
{
"commandToExecute": "sh my_script.sh"
}
SETTINGS
tags = {
environment = "Production"
}
}
resource "azurerm_virtual_machine" "middleware_vm" {
name = "${var.middleware_vm}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.middleware.name}"
network_interface_ids = ["${azurerm_network_interface.middleware.id}"]
vm_size = "Standard_DS4_v2"
storage_os_disk {
name = "centos_os_disk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
storage_data_disk {
name = "managed_backup_disk"
create_option = "Empty"
caching = "ReadWrite"
disk_size_gb = "256"
managed_disk_type = "Premium_LRS"
lun = 0
}
storage_image_reference {
publisher = "OpenLogic"
offer = "CentOS"
sku = "7.5"
version = "latest"
}
os_profile {
computer_name = "${var.middleware_vm}"
admin_username = "middlewareadmin"
custom_data = "${file("scripts/middleware_disk.sh")}"
}
In azurerm_virtual_machine_extension, you can use:
protected_settings = <<PROTECTED_SETTINGS
  {
    "script": "${base64encode(file(var.scfile))}"
  }
PROTECTED_SETTINGS
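For context, here is a minimal sketch of a complete extension resource built around that inline script property (hedged: the resource labels and script path are assumptions; the script property of the CustomScript 2.x handler runs a base64-encoded script without needing fileUris):
resource "azurerm_virtual_machine_extension" "run_script" {
  name                 = "run-script"
  location             = "${azurerm_resource_group.test.location}"
  resource_group_name  = "${azurerm_resource_group.test.name}"
  virtual_machine_name = "${azurerm_virtual_machine.test.name}"
  publisher            = "Microsoft.Azure.Extensions"
  type                 = "CustomScript"
  type_handler_version = "2.0"

  # Track the latest 2.x minor version; the "script" property needs a recent
  # handler (an assumption based on the Custom Script Extension docs).
  auto_upgrade_minor_version = true

  # The whole script travels inline, base64-encoded, so nothing has to be
  # downloaded from a remote location.
  protected_settings = <<PROTECTED_SETTINGS
  {
    "script": "${base64encode(file("scripts/my_script.sh"))}"
  }
PROTECTED_SETTINGS
}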
First, the VM extension will just execute the script and not copy the file to the VM. If you want to copy the script into the VM and then execute it, I suggest the Terraform file and remote-exec provisioners.
Here is an example that copies the file into an existing VM and executes the script:
resource "null_resource" "example" {
connection {
type = "ssh"
user = "azureuser"
password = "azureuser#2018"
host = "13.92.255.50"
port = 22
}
provisioner "file" {
source = "script.sh"
destination = "/tmp/script.sh"
}
provisioner "remote-exec" {
inline = [
"/bin/bash /tmp/script.sh"
]
}
}
Note: the script should be created in the current directory.

How can I use Terraform's file provisioner to copy from my local machine onto a VM?

I'm new to Terraform and have so far managed to get a basic VM (plus Resource Manager trimmings) up and running on Azure. The next task I have in mind is to have Terraform copy a file from my local machine into the newly created instance. Ideally I'm after a solution where the file will be copied each time the apply command is run.
I feel like I'm pretty close, but so far I just get endless "Still creating..." statements once I apply (the file is 0 KB, so after a couple of minutes it feels safe to give up).
So far, this is what I've got (based on this answer): https://stackoverflow.com/a/37866044/4941009
Network
resource "azurerm_public_ip" "pub-ip" {
name = "PublicIp"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
public_ip_address_allocation = "Dynamic"
domain_name_label = "${var.hostname}"
}
VM
resource "azurerm_virtual_machine" "vm" {
name = "${var.hostname}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.rg.name}"
vm_size = "${var.vm_size}"
network_interface_ids = ["${azurerm_network_interface.nic.id}"]
storage_image_reference {
publisher = "${var.image_publisher}"
offer = "${var.image_offer}"
sku = "${var.image_sku}"
version = "${var.image_version}"
}
storage_os_disk {
name = "${var.hostname}osdisk1"
vhd_uri = "${azurerm_storage_account.stor.primary_blob_endpoint}${azurerm_storage_container.cont.name}/${var.hostname}osdisk.vhd"
os_type = "${var.os_type}"
caching = "ReadWrite"
create_option = "FromImage"
}
os_profile {
computer_name = "${var.hostname}"
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
}
os_profile_windows_config {
provision_vm_agent = true
}
boot_diagnostics {
enabled = true
storage_uri = "${azurerm_storage_account.stor.primary_blob_endpoint}"
}
tags {
environment = "${var.environment}"
}
}
File Provisioner
resource "null_resource" "copy-test-file" {
connection {
type = "ssh"
host = "${azurerm_virtual_machine.vm.ip_address}"
user = "${var.admin_username}"
password = "${var.admin_password}"
}
provisioner "file" {
source = "test.txt"
destination = "/tmp/test.txt"
}
}
As an aside, if I pass incorrect login details to the provisioner (i.e. rerun this after the VM has already been created and supply a different password to the provisioner), the behaviour is the same. Can anyone suggest where I'm going wrong?
I eventually got this working. Honestly, I kind of forgot about this question, so I can't remember what my exact issue was; the example below, though, seems to work on my instance:
resource "null_resource" remoteExecProvisionerWFolder {
provisioner "file" {
source = "test.txt"
destination = "/tmp/test.txt"
}
connection {
host = "${azurerm_virtual_machine.vm.ip_address}"
type = "ssh"
user = "${var.admin_username}"
password = "${var.admin_password}"
agent = "false"
}
}
So it looks like the only difference here is the addition of agent = "false". This makes some sense, as there's only one SSH authentication agent on Windows and it's probable I hadn't explicitly specified to use that agent before. However, it could well be that I ultimately changed something elsewhere in the configuration. Sorry, future people, for not being much help on this one.
FYI, for Windows instances you can connect via type winrm:
resource "null_resource" "provision_web" {
connection {
host = "${azurerm_virtual_machine.vm.ip_address}"
type = "winrm"
user = "alex"
password = "alexiscool1!"
}
provisioner "file" {
source = "path/to/folder"
destination = "C:/path/to/destination"
}
}

SSH connection to Azure VM with Terraform

I have successfully created a VM as part of a Resource Group on Azure using Terraform. The next step is to ssh into the new machine and run a few commands. For that, I have created a provisioner as part of the VM resource and set up an SSH connection:
resource "azurerm_virtual_machine" "helloterraformvm" {
name = "terraformvm"
location = "West US"
resource_group_name = "${azurerm_resource_group.helloterraform.name}"
network_interface_ids = ["${azurerm_network_interface.helloterraformnic.id}"]
vm_size = "Standard_A0"
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "14.04.2-LTS"
version = "latest"
}
os_profile {
computer_name = "hostname"
user = "some_user"
password = "some_password"
}
os_profile_linux_config {
disable_password_authentication = false
}
provisioner "remote-exec" {
inline = [
"sudo apt-get install docker.io -y"
]
connection {
type = "ssh"
user = "some_user"
password = "some_password"
}
}
}
If I run "terraform apply", it seems to get into an infinite loop trying to ssh unsuccessfully, repeating this log over and over:
azurerm_virtual_machine.helloterraformvm (remote-exec): Connecting to remote host via SSH...
azurerm_virtual_machine.helloterraformvm (remote-exec): Host:
azurerm_virtual_machine.helloterraformvm (remote-exec): User: testadmin
azurerm_virtual_machine.helloterraformvm (remote-exec): Password: true
azurerm_virtual_machine.helloterraformvm (remote-exec): Private key: false
azurerm_virtual_machine.helloterraformvm (remote-exec): SSH Agent: true
I'm sure I'm doing something wrong, but I don't know what it is :(
EDIT:
I have tried setting up this machine without the provisioner, and I can SSH into it with no problems using the given username/password. However, I need to look up the host name in the Azure portal because I don't know how to retrieve it from Terraform. It's suspicious that the "Host:" line in the log is empty, so I wonder if it has anything to do with that?
UPDATE:
I've tried different things, like indicating the host name in the connection with
host = "${azurerm_public_ip.helloterraformip.id}"
and
host = "${azurerm_public_ip.helloterraformips.ip_address}"
as indicated in the docs, but with no success.
I've also tried using SSH keys instead of a password, but same result: an infinite loop of connection tries, with no clear error message as to why it's not connecting.
I have managed to make this work. I changed several things:
- Gave the name of the host to the connection.
- Configured the SSH keys properly; they need to be unencrypted.
- Took the connection element out of the provisioner element.
Here's the full working Terraform file, with data like the SSH keys replaced:
# Configure Azure provider
provider "azurerm" {
  subscription_id = "${var.azure_subscription_id}"
  client_id       = "${var.azure_client_id}"
  client_secret   = "${var.azure_client_secret}"
  tenant_id       = "${var.azure_tenant_id}"
}

# create a resource group if it doesn't exist
resource "azurerm_resource_group" "rg" {
  name     = "sometestrg"
  location = "ukwest"
}

# create virtual network
resource "azurerm_virtual_network" "vnet" {
  name                = "tfvnet"
  address_space       = ["10.0.0.0/16"]
  location            = "ukwest"
  resource_group_name = "${azurerm_resource_group.rg.name}"
}

# create subnet
resource "azurerm_subnet" "subnet" {
  name                 = "tfsub"
  resource_group_name  = "${azurerm_resource_group.rg.name}"
  virtual_network_name = "${azurerm_virtual_network.vnet.name}"
  address_prefix       = "10.0.2.0/24"
  #network_security_group_id = "${azurerm_network_security_group.nsg.id}"
}

# create public IPs
resource "azurerm_public_ip" "ip" {
  name                         = "tfip"
  location                     = "ukwest"
  resource_group_name          = "${azurerm_resource_group.rg.name}"
  public_ip_address_allocation = "dynamic"
  domain_name_label            = "sometestdn"

  tags {
    environment = "staging"
  }
}

# create network interface
resource "azurerm_network_interface" "ni" {
  name                = "tfni"
  location            = "ukwest"
  resource_group_name = "${azurerm_resource_group.rg.name}"

  ip_configuration {
    name                          = "ipconfiguration"
    subnet_id                     = "${azurerm_subnet.subnet.id}"
    private_ip_address_allocation = "static"
    private_ip_address            = "10.0.2.5"
    public_ip_address_id          = "${azurerm_public_ip.ip.id}"
  }
}

# create storage account
resource "azurerm_storage_account" "storage" {
  name                = "someteststorage"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  location            = "ukwest"
  account_type        = "Standard_LRS"

  tags {
    environment = "staging"
  }
}

# create storage container
resource "azurerm_storage_container" "storagecont" {
  name                  = "vhd"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  storage_account_name  = "${azurerm_storage_account.storage.name}"
  container_access_type = "private"
  depends_on            = ["azurerm_storage_account.storage"]
}

# create virtual machine
resource "azurerm_virtual_machine" "vm" {
  name                  = "sometestvm"
  location              = "ukwest"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  network_interface_ids = ["${azurerm_network_interface.ni.id}"]
  vm_size               = "Standard_A0"

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name          = "myosdisk"
    vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.storagecont.name}/myosdisk.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  os_profile {
    computer_name  = "testhost"
    admin_username = "testuser"
    admin_password = "Password123"
  }

  os_profile_linux_config {
    disable_password_authentication = false

    ssh_keys = [{
      path     = "/home/testuser/.ssh/authorized_keys"
      key_data = "ssh-rsa xxx email@something.com"
    }]
  }

  connection {
    host        = "sometestdn.ukwest.cloudapp.azure.com"
    user        = "testuser"
    type        = "ssh"
    private_key = "${file("~/.ssh/id_rsa_unencrypted")}"
    timeout     = "1m"
    agent       = true
  }

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install docker.io -y",
      "git clone https://github.com/somepublicrepo.git",
      "cd Docker-sample",
      "sudo docker build -t mywebapp .",
      "sudo docker run -d -p 5000:5000 mywebapp"
    ]
  }

  tags {
    environment = "staging"
  }
}
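As a side note on the empty "Host:" line (an observation based on how Azure allocates addresses, not part of the original answer): a public IP with dynamic allocation has no address until it is attached to a running VM, so interpolating its ip_address at creation time can produce an empty string. One hedged workaround is to read the address back with a data source once the VM exists:
data "azurerm_public_ip" "vm_ip" {
  name                = "${azurerm_public_ip.ip.name}"
  resource_group_name = "${azurerm_resource_group.rg.name}"

  # Wait for the VM so the dynamic IP has actually been allocated.
  depends_on = ["azurerm_virtual_machine.vm"]
}
The connection block could then use host = "${data.azurerm_public_ip.vm_ip.ip_address}" instead of the fixed domain name.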
According to your description, Azure Custom Script Extension is an option for you.
The Custom Script Extension downloads and executes scripts on Azure virtual machines. This extension is useful for post deployment configuration, software installation, or any other configuration / management task.
Remove the provisioner "remote-exec" block and use the following instead:
resource "azurerm_virtual_machine_extension" "helloterraformvm" {
name = "hostname"
location = "West US"
resource_group_name = "${azurerm_resource_group.helloterraformvm.name}"
virtual_machine_name = "${azurerm_virtual_machine.helloterraformvm.name}"
publisher = "Microsoft.OSTCExtensions"
type = "CustomScriptForLinux"
type_handler_version = "1.2"
settings = <<SETTINGS
{
"commandToExecute": "apt-get install docker.io -y"
}
SETTINGS
}
Note: The command is executed by the root user, so don't use sudo.
For more information, please refer to this link: azurerm_virtual_machine_extension.
For a list of possible extensions, you can use the Azure CLI command az vm extension image list -o table
Update: The above example only supports a single command. If you need multiple commands, for example to install Docker on your VM, you need:
apt-get update
apt-get install docker.io -y
Save this as a file named script.sh and upload it to an Azure Storage account or GitHub (the file should be public). Modify the Terraform file like below:
settings = <<SETTINGS
  {
    "fileUris": ["https://gist.githubusercontent.com/Walter-Shui/dedb53f71da126a179544c91d267cdce/raw/bb3e4d90e3291530570eca6f4ff7981fdcab695c/script.sh"],
    "commandToExecute": "sh script.sh"
  }
SETTINGS
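If the script cannot be public, a hedged variant (property names as documented for the Custom Script Extension) is to put the storage account credentials in protected_settings so fileUris can point at a private blob:
settings = <<SETTINGS
  {
    "fileUris": ["https://someteststorage.blob.core.windows.net/scripts/script.sh"],
    "commandToExecute": "sh script.sh"
  }
SETTINGS

protected_settings = <<PROTECTED_SETTINGS
  {
    "storageAccountName": "someteststorage",
    "storageAccountKey": "${azurerm_storage_account.storage.primary_access_key}"
  }
PROTECTED_SETTINGS
Here the account name and blob URL are placeholders reusing the someteststorage account from the earlier configuration.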
