I am using Terraform to deploy an Azure VM. I want to explore the option of pre-installing a set of tools, such as the Azure CLI, on the VM once it has been created in the cloud.
Can someone help me with an example of how that can be achieved?
My current terraform script looks like:
resource "azurerm_linux_virtual_machine" "main" {
  name                            = "trainingVM-1"
  resource_group_name             = data.azurerm_resource_group.current.name
  location                        = data.azurerm_resource_group.current.location
  size                            = "Standard_B2s"
  admin_username                  = "vmsysuser2"
  admin_password                  = "Training123!"
  disable_password_authentication = false

  network_interface_ids = [
    azurerm_network_interface.linux.id,
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  identity {
    type         = "SystemAssigned, UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.uai.id]
  }
}
1/ You need to add this to your Azure VM resource block:
custom_data = filebase64("azure_cli.tpl")
2/ Then create the file "azure_cli.tpl" with the Linux commands that install the Azure CLI. It must start with #!/bin/bash so that cloud-init executes it as a script:
#!/bin/bash
sudo apt-get update
sudo apt-get install azure-cli -y
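To sanity-check locally what filebase64() will hand to the VM as user data, a quick round-trip (a sketch; the template content matches the file above):

```shell
# Recreate the template locally and verify the base64 payload decodes back intact
printf '#!/bin/bash\nsudo apt-get update\nsudo apt-get install azure-cli -y\n' > azure_cli.tpl
base64 -w0 azure_cli.tpl > payload.b64   # filebase64() produces this same encoding
base64 -d payload.b64 | diff - azure_cli.tpl && echo "payload OK"
```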
Related
I am trying to provision an Azure virtual machine and need to install kubectl on it, so I am using a bash script and pass it to the VM's custom_data section.
Everything works fine - it provisions the VM and installs kubectl.
But the problem is, if I make some modifications to the bash script and run terraform apply, it shows no changes - it doesn't detect that I changed the bash script.
Here's my code snippet. Can someone help me understand this?
locals {
  admin_username = "myjumphost"
  admin_password = "mypassword!610542"
}

resource "azurerm_virtual_machine" "vm" {
  name                             = var.vm_name
  location                         = var.location
  resource_group_name              = var.rg_name
  network_interface_ids            = var.nic_id
  vm_size                          = var.vm_size
  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = var.storage_image_reference.publisher
    offer     = var.storage_image_reference.offer
    sku       = var.storage_image_reference.sku
    version   = var.storage_image_reference.version
  }

  storage_os_disk {
    name              = var.storage_os_disk.name
    caching           = var.storage_os_disk.caching
    create_option     = var.storage_os_disk.create_option
    managed_disk_type = var.storage_os_disk.managed_disk_type
  }

  os_profile {
    computer_name  = var.vm_name
    admin_username = local.admin_username
    admin_password = local.admin_password
    custom_data    = file("${path.module}/${var.custom_data_script}")
  }

  os_profile_linux_config {
    disable_password_authentication = true
    ssh_keys {
      key_data = file("${path.module}/${var.ssh_public_key}")
      path     = "/home/${local.admin_username}/.ssh/authorized_keys"
    }
  }

  tags = merge(var.common_tags)
}
and my script install.sh:
#!/bin/bash
# install Kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client --output=yaml > /tmp/kubectl_version.yaml
# Install Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
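A side note: the script downloads kubectl.sha256 but never actually verifies it; the check is one extra line. Here is the pattern, demonstrated with a local stand-in file rather than the real download:

```shell
# Stand-in for the downloaded binary and its published checksum
printf 'fake-kubectl-binary\n' > kubectl-demo
sha256sum kubectl-demo | awk '{print $1}' > kubectl-demo.sha256
# The verification line the script is missing (same shape works for the real kubectl files)
echo "$(cat kubectl-demo.sha256)  kubectl-demo" | sha256sum --check
```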
Try the refresh command; it is backwards compatible. Usually it won't be necessary, because plan performs the same refresh. Most of the confusion is caused by bash or PowerShell scripts that run as cloud-init scripts on the backend and implement the functionality; you can cross-check that in the activity logs.
terraform apply -refresh-only -auto-approve
I have tried to replicate the same with the code base below.
The main tf file is as follows:
resource "azurerm_resource_group" "example" {
  name     = "v-swarna-mindtree"
  location = "Germany West Central"
}

data "azuread_client_config" "current" {}

resource "azurerm_virtual_network" "puvnet" {
  name                = "Public_VNET"
  resource_group_name = azurerm_resource_group.example.name
  location            = "Germany West Central"
  address_space       = ["10.19.0.0/16"]
  dns_servers         = ["10.19.0.4", "10.19.0.5"]
}

resource "azurerm_subnet" "osubnet" {
  name                 = "Outer_Subnet"
  resource_group_name  = azurerm_resource_group.example.name
  address_prefixes     = ["10.19.1.0/24"]
  virtual_network_name = azurerm_virtual_network.puvnet.name
}

resource "azurerm_network_interface" "main" {
  name                = "testdemo"
  location            = "Germany West Central"
  resource_group_name = azurerm_resource_group.example.name

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = azurerm_subnet.osubnet.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_virtual_machine" "main" {
  name                  = "vmjumphost"
  location              = "Germany West Central"
  resource_group_name   = azurerm_resource_group.example.name
  network_interface_ids = [azurerm_network_interface.main.id]
  //vm_size             = "Standard_A1_v2"
  vm_size               = "Standard_DS2_v2"

  storage_image_reference {
    offer     = "0001-com-ubuntu-server-focal"
    publisher = "Canonical"
    sku       = "20_04-lts-gen2"
    version   = "latest"
  }

  storage_os_disk {
    name              = "myosdisk2"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "vm-swarnademo"
    admin_username = "testadmin"
    admin_password = "Password1234!"
    // custom_data = file("${path.module}/${var.custom_data_script}")
    custom_data    = file("install.sh")
  }

  os_profile_linux_config {
    disable_password_authentication = false
    # ssh_keys {
    #   key_data = file("${path.module}/${var.ssh_public_key}")
    #   path     = "/home/${local.admin_username}/.ssh/authorized_keys"
    # }
  }

  tags = {
    environment = "staging"
  }
}
The install.sh file is as follows:
#!/bin/bash
# install Kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client --output=yaml > /tmp/kubectl_version.yaml
#testing by adding command -start
#sudo apt-get -y update
#testing by adding command -End
#Install Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
Then I tried updating the script with the following commands:
#testing by adding command -start
#sudo apt-get -y update
#testing by adding command -End
Step 1:
On running plan and apply, all resources were created in the portal.
Step 2:
I updated the script by uncommenting the command:
#testing by adding command -start
sudo apt-get -y update
#testing by adding command -End
Upon running plan and apply again, it refreshed the state of the virtual machine and picked up all the changes.
Verification from Activity Log:
I am trying to deploy a CIS Ubuntu image in Azure using Terraform, but am getting the following error:
│ Error: Code="VMMarketplaceInvalidInput" Message="Unable to deploy from the Marketplace image or a custom image sourced from Marketplace image. The part number in the purchase information for VM '/xxx' is not as expected. Beware that the Plan object's properties are case-sensitive. "
My terraform resource looks like so (changed the names for brevity):
resource "azurerm_virtual_machine" "vm" {
  name                          = "name"
  location                      = "East US"
  resource_group_name           = "name"
  network_interface_ids         = [azurerm_network_interface.nic.id]
  vm_size                       = "Standard_D8_v3"
  delete_os_disk_on_termination = true

  storage_image_reference {
    offer     = "cis-ubuntu-linux-2004-l1"
    publisher = "center-for-internet-security-inc"
    sku       = "cis-ubuntu2004-l1povw-jan-2022"
    version   = "1.1.9"
  }

  plan {
    name      = "cis-ubuntu2004-l1"
    publisher = "center-for-internet-security-inc"
    product   = "cis-ubuntu-linux-2004-l1"
  }

  storage_os_disk {
    name              = "name"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "name"
    admin_username = "username"
    admin_password = "password"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}
I have accepted the terms via the CLI and get the following when querying:
$ az vm image terms show --offer "cis-ubuntu-linux-2004-l1" --plan "cis-ubuntu2004-l1" --publisher "center-for-internet-security-inc"
{
"accepted": true,
...
Not sure where I am going wrong?
Thank you for confirming that the issue got resolved by changing to the correct SKU name. Please find the same, which I have tested in my environment.
You can list the VM images available in the Azure Marketplace for a given publisher:
az vm image list --output table --all --publisher center-for-internet-security-inc
For version 1.1.9 the SKU is cis-ubuntu2004-l1povw, not cis-ubuntu2004-l1povw-jan-2022.
Reference URL: https://learn.microsoft.com/en-us/cli/azure/vm/image/terms?view=azure-cli-latest
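With the corrected SKU, the image block would then read roughly as follows (a sketch based on the answer above; the plan block stays as accepted via the CLI):

```hcl
storage_image_reference {
  offer     = "cis-ubuntu-linux-2004-l1"
  publisher = "center-for-internet-security-inc"
  sku       = "cis-ubuntu2004-l1povw"  # corrected SKU for version 1.1.9
  version   = "1.1.9"
}
```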
I would like to know the Terraform script for provisioning an Azure virtual machine scale set along with custom data or cloud-init.
I tried many ways to run my script against the VMSS, but it's not working. As per my understanding, during provisioning of the VMSS I should run some shell scripts so that they can install the necessary agents (New Relic) onto all VMSS instances.
I'm looking for a Terraform script for a VMSS along with custom data or cloud-init configuration.
I used this one a while ago:
resource "azurerm_linux_virtual_machine_scale_set" "jumpserver" {
  name                            = "${local.prefix}-jumpservers-vmss"
  resource_group_name             = azurerm_resource_group.deployment.name
  location                        = azurerm_resource_group.deployment.location
  sku                             = "Standard_B2s"
  instances                       = 2
  overprovision                   = false
  single_placement_group          = false
  admin_username                  = "adminuser"
  admin_password                  = azurerm_key_vault_secret.vmsecret.value
  disable_password_authentication = false
  custom_data                     = base64encode(data.local_file.cloudinit.content)

  source_image_reference {
    publisher = "canonical"
    offer     = "0001-com-ubuntu-server-focal"
    sku       = "20_04-lts"
    version   = "latest"
  }

  os_disk {
    storage_account_type = "Standard_LRS"
    caching              = "ReadWrite"
  }

  network_interface {
    name    = "${local.prefix}-jumpserver-vmss-nic"
    primary = true

    ip_configuration {
      name      = "${local.prefix}-jumpserver-vmss-ipconfig"
      primary   = true
      subnet_id = azurerm_subnet.jumpservers_vmss.id
    }
  }

  boot_diagnostics {
    storage_account_uri = null
  }
}

# Data template cloud-init bootstrapping file used by the VMSS
data "local_file" "cloudinit" {
  filename = "${path.module}/cloudinit.conf"
}
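A side note: the local_file data source plus base64encode works, but the same can be done in one step with filebase64() (a sketch):

```hcl
# Equivalent to base64encode(data.local_file.cloudinit.content), without the data source
custom_data = filebase64("${path.module}/cloudinit.conf")
```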
cloudinit.conf
#cloud-config
bootcmd:
  - mkdir -p /etc/systemd/system/walinuxagent.service.d
  - echo "[Unit]\nAfter=cloud-final.service" > /etc/systemd/system/walinuxagent.service.d/override.conf
  - sed "s/After=multi-user.target//g" /lib/systemd/system/cloud-final.service > /etc/systemd/system/cloud-final.service
  - systemctl daemon-reload
package_update: true
package_upgrade: true
# Add external package sources, e.g. for Microsoft packages and Kubernetes
apt:
  preserve_sources_list: true
  sources_list: |
    deb $MIRROR $RELEASE main restricted
    deb-src $MIRROR $RELEASE main restricted
    deb $PRIMARY $RELEASE universe restricted
    deb $SECURITY $RELEASE-security multiverse
  sources:
    microsoft-azurecli.list:
      source: "deb https://packages.microsoft.com/repos/azure-cli focal main"
      key: |
        -----BEGIN PGP PUBLIC KEY BLOCK-----
        Version: GnuPG v1.4.7 (GNU/Linux)

        mQENBFYxWIwBCADAKoZhZlJxGNGWzqV+1OG1xiQeoowKhssGAKvd+buXCGISZJwT
        LXZqIcIiLP7pqdcZWtE9bSc7yBY2MalDp9Liu0KekywQ6VVX1T72NPf5Ev6x6DLV
        7aVWsCzUAF+eb7DC9fPuFLEdxmOEYoPjzrQ7cCnSV4JQxAqhU4T6OjbvRazGl3ag
        OeizPXmRljMtUUttHQZnRhtlzkmwIrUivbfFPD+fEoHJ1+uIdfOzZX8/oKHKLe2j
        H632kvsNzJFlROVvGLYAk2WRcLu+RjjggixhwiB+Mu/A8Tf4V6b+YppS44q8EvVr
        M+QvY7LNSOffSO6Slsy9oisGTdfE39nC7pVRABEBAAG0N01pY3Jvc29mdCAoUmVs
        ZWFzZSBzaWduaW5nKSA8Z3Bnc2VjdXJpdHlAbWljcm9zb2Z0LmNvbT6JATUEEwEC
        AB8FAlYxWIwCGwMGCwkIBwMCBBUCCAMDFgIBAh4BAheAAAoJEOs+lK2+EinPGpsH
        /32vKy29Hg51H9dfFJMx0/a/F+5vKeCeVqimvyTM04C+XENNuSbYZ3eRPHGHFLqe
        MNGxsfb7C7ZxEeW7J/vSzRgHxm7ZvESisUYRFq2sgkJ+HFERNrqfci45bdhmrUsy
        7SWw9ybxdFOkuQoyKD3tBmiGfONQMlBaOMWdAsic965rvJsd5zYaZZFI1UwTkFXV
        KJt3bp3Ngn1vEYXwijGTa+FXz6GLHueJwF0I7ug34DgUkAFvAs8Hacr2DRYxL5RJ
        XdNgj4Jd2/g6T9InmWT0hASljur+dJnzNiNCkbn9KbX7J/qK1IbR8y560yRmFsU+
        NdCFTW7wY0Fb1fWJ+/KTsC4=
        =J6gs
        -----END PGP PUBLIC KEY BLOCK-----
    microsoft-prod.list:
      source: "deb https://packages.microsoft.com/ubuntu/20.04/prod focal main"
      key: |
        -----BEGIN PGP PUBLIC KEY BLOCK-----
        Version: GnuPG v1.4.7 (GNU/Linux)

        mQENBFYxWIwBCADAKoZhZlJxGNGWzqV+1OG1xiQeoowKhssGAKvd+buXCGISZJwT
        LXZqIcIiLP7pqdcZWtE9bSc7yBY2MalDp9Liu0KekywQ6VVX1T72NPf5Ev6x6DLV
        7aVWsCzUAF+eb7DC9fPuFLEdxmOEYoPjzrQ7cCnSV4JQxAqhU4T6OjbvRazGl3ag
        OeizPXmRljMtUUttHQZnRhtlzkmwIrUivbfFPD+fEoHJ1+uIdfOzZX8/oKHKLe2j
        H632kvsNzJFlROVvGLYAk2WRcLu+RjjggixhwiB+Mu/A8Tf4V6b+YppS44q8EvVr
        M+QvY7LNSOffSO6Slsy9oisGTdfE39nC7pVRABEBAAG0N01pY3Jvc29mdCAoUmVs
        ZWFzZSBzaWduaW5nKSA8Z3Bnc2VjdXJpdHlAbWljcm9zb2Z0LmNvbT6JATUEEwEC
        AB8FAlYxWIwCGwMGCwkIBwMCBBUCCAMDFgIBAh4BAheAAAoJEOs+lK2+EinPGpsH
        /32vKy29Hg51H9dfFJMx0/a/F+5vKeCeVqimvyTM04C+XENNuSbYZ3eRPHGHFLqe
        MNGxsfb7C7ZxEeW7J/vSzRgHxm7ZvESisUYRFq2sgkJ+HFERNrqfci45bdhmrUsy
        7SWw9ybxdFOkuQoyKD3tBmiGfONQMlBaOMWdAsic965rvJsd5zYaZZFI1UwTkFXV
        KJt3bp3Ngn1vEYXwijGTa+FXz6GLHueJwF0I7ug34DgUkAFvAs8Hacr2DRYxL5RJ
        XdNgj4Jd2/g6T9InmWT0hASljur+dJnzNiNCkbn9KbX7J/qK1IbR8y560yRmFsU+
        NdCFTW7wY0Fb1fWJ+/KTsC4=
        =J6gs
        -----END PGP PUBLIC KEY BLOCK-----
    kubernetes:
      source: "deb http://apt.kubernetes.io/ kubernetes-xenial main"
      keyid: 7F92E05B31093BEF5A3C2D38FEEA9169307EA071
# Install packages via apt. To add packages it might be required to add additional sources above.
packages:
  - unzip
  - git
  - wget
  - curl
  - apt-transport-https
  - software-properties-common
  - powershell
  - azure-cli
  - npm
  - docker.io
  - packages-microsoft-prod
  - dotnet-sdk-6.0
  - kubectl
# Install latest version of azcopy (can not be installed via apt)
runcmd:
  # Download AzCopy and extract archive
  - wget https://aka.ms/downloadazcopy-v10-linux
  - tar -xvf downloadazcopy-v10-linux
  # Move AzCopy to the destination
  - sudo cp ./azcopy_linux_amd64_*/azcopy /usr/bin/
  # Allow execution for all users
  - sudo chmod +x /usr/bin/azcopy
# create the docker group
groups:
  - docker
# Add default auto created user to docker group
system_info:
  default_user:
    groups: [docker]
final_message: "The system is finally up, after $UPTIME seconds"
Below is the solution I was finally able to accomplish via Terraform using custom data.
terraform {
  required_version = ">=0.12"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>2.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "vmss" {
  name     = var.resource_group_name
  location = var.location
  tags     = var.tags
}

resource "random_string" "fqdn" {
  length  = 6
  special = false
  upper   = false
  number  = false
}

resource "azurerm_virtual_network" "vmss" {
  name                = "vmss-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = var.location
  resource_group_name = azurerm_resource_group.vmss.name
  tags                = var.tags
}

resource "azurerm_subnet" "vmss" {
  name                 = "vmss-subnet"
  resource_group_name  = azurerm_resource_group.vmss.name
  virtual_network_name = azurerm_virtual_network.vmss.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_public_ip" "vmss" {
  name                = "vmss-public-ip"
  location            = var.location
  resource_group_name = azurerm_resource_group.vmss.name
  allocation_method   = "Static"
  domain_name_label   = random_string.fqdn.result
  tags                = var.tags
}

resource "azurerm_virtual_machine_scale_set" "vmss" {
  name                = "vmscaleset"
  location            = var.location
  resource_group_name = azurerm_resource_group.vmss.name
  upgrade_policy_mode = "Manual"

  sku {
    name     = "Standard_DS1_v2"
    tier     = "Standard"
    capacity = 2
  }

  storage_profile_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_profile_os_disk {
    name              = ""
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name_prefix = "vmlab"
    admin_username       = var.admin_user
    admin_password       = var.admin_password
    # This is the key line: custom data passed to the VMSS is invoked automatically each time a VM instance spins up, so the script executes on every new instance.
    custom_data          = file("test.sh")
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  network_profile {
    name    = "terraformnetworkprofile"
    primary = true

    ip_configuration {
      name      = "IPConfiguration"
      subnet_id = azurerm_subnet.vmss.id
      #load_balancer_backend_address_pool_ids = [azurerm_lb_backend_address_pool.bpepool.id]
      primary   = true
    }
  }

  tags = var.tags
}
I am using Terraform to create Azure VMs, but since those don't have much functionality installed, I was investigating on other Azure resources. I found the Azure Data Science VM is the one that covers most of my requirements, so I was wondering if there is a way to create those with Terraform. I can't see it in the documentation, but maybe there is a workaround.
Any orientation on this would be great!
Assumption
Azure Resource Model.
Steps
There will be several steps to this process. You'll firstly need to retrieve a platform image.
data "azurerm_platform_image" "test" {
  location  = "West Europe"
  publisher = "Microsoft"
  offer     = "xx"
  sku       = "xx"
}
Before you can fully populate this however, you will need to retrieve the SKU and Offer. Annoyingly, this isn't readily available on the internet and requires an API call or Powershell fun.
This link will help you achieve this.
Once you've got the above terraform populated, you can then utilise this to create a virtual machine.
resource "azurerm_virtual_machine" "test" {
  name                  = "acctvm"
  location              = "West US 2"
  resource_group_name   = "${azurerm_resource_group.test.name}"
  network_interface_ids = ["${azurerm_network_interface.test.id}"]
  vm_size               = "Standard_DS1_v2"

  storage_image_reference {
    id = "${data.azurerm_platform_image.test.id}"
  }

  storage_os_disk {
    name              = "myosdisk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  # Optional data disks
  storage_data_disk {
    name              = "datadisk_new"
    managed_disk_type = "Standard_LRS"
    create_option     = "Empty"
    lun               = 0
    disk_size_gb      = "1023"
  }

  storage_data_disk {
    name            = "${azurerm_managed_disk.test.name}"
    managed_disk_id = "${azurerm_managed_disk.test.id}"
    create_option   = "Attach"
    lun             = 1
    disk_size_gb    = "${azurerm_managed_disk.test.disk_size_gb}"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "testadmin"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags {
    environment = "staging"
  }
}
Follow the steps here. To fill in the Terraform "storage_image_reference" part, you can use the Azure CLI to get the information. For example:
az vm image list --offer linux-data-science-vm --all --output table
Or
az vm image list --offer windows-data-science-vm --all --output table
Here is the list of SKUs and Offers for the Azure Data Science VM.
Windows Server 2016 edition: offer=windows-data-science-vm sku=windows2016
Ubuntu edition: offer=linux-data-science-vm-ubuntu sku=linuxdsvmubuntu
Windows Server 2012 edition: offer=standard-data-science-vm sku=standard-data-science-vm
CentOS edition: offer=linux-data-science-vm sku=linuxdsvm
Publisher for all these is microsoft-ads
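Putting those values into Terraform, the Ubuntu edition would look roughly like this (a sketch; as a marketplace image it may also require a matching plan block and accepted terms):

```hcl
storage_image_reference {
  publisher = "microsoft-ads"
  offer     = "linux-data-science-vm-ubuntu"
  sku       = "linuxdsvmubuntu"
  version   = "latest"
}
```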
I have successfully created a VM as part of a Resource Group on Azure using Terraform. The next step is to ssh into the new machine and run a few commands. For that, I have created a provisioner as part of the VM resource and set up an SSH connection:
resource "azurerm_virtual_machine" "helloterraformvm" {
  name                  = "terraformvm"
  location              = "West US"
  resource_group_name   = "${azurerm_resource_group.helloterraform.name}"
  network_interface_ids = ["${azurerm_network_interface.helloterraformnic.id}"]
  vm_size               = "Standard_A0"

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "14.04.2-LTS"
    version   = "latest"
  }

  os_profile {
    computer_name = "hostname"
    user          = "some_user"
    password      = "some_password"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get install docker.io -y"
    ]

    connection {
      type     = "ssh"
      user     = "some_user"
      password = "some_password"
    }
  }
}
If I run "terraform apply", it seems to get into an infinite loop trying to ssh unsuccessfully, repeating this log over and over:
azurerm_virtual_machine.helloterraformvm (remote-exec): Connecting to remote host via SSH...
azurerm_virtual_machine.helloterraformvm (remote-exec): Host:
azurerm_virtual_machine.helloterraformvm (remote-exec): User: testadmin
azurerm_virtual_machine.helloterraformvm (remote-exec): Password: true
azurerm_virtual_machine.helloterraformvm (remote-exec): Private key: false
azurerm_virtual_machine.helloterraformvm (remote-exec): SSH Agent: true
I'm sure I'm doing something wrong, but I don't know what it is :(
EDIT:
I have tried setting up this machine without the provisioner, and I can SSH to it no problems with the given username/passwd. However I need to look up the host name in the Azure portal because I don't know how to retrieve it from Terraform. It's suspicious that the "Host:" line in the log is empty, so I wonder if it has anything to do with that?
UPDATE:
I've tried with different things like indicating the host name in the connection with
host = "${azurerm_public_ip.helloterraformip.id}"
and
host = "${azurerm_public_ip.helloterraformips.ip_address}"
as indicated in the docs, but with no success.
I've also tried using ssh-keys instead of password, but same result - infinite loop of connection tries, with no clear error message as of why it's not connecting.
I have managed to make this work. I changed several things:
Gave name of host to connection.
Configured SSH keys properly - they need to be unencrypted.
Took the connection element out of the provisioner element.
Here's the full working Terraform file, replacing the data like SSH keys, etc.:
# Configure Azure provider
provider "azurerm" {
  subscription_id = "${var.azure_subscription_id}"
  client_id       = "${var.azure_client_id}"
  client_secret   = "${var.azure_client_secret}"
  tenant_id       = "${var.azure_tenant_id}"
}

# create a resource group if it doesn't exist
resource "azurerm_resource_group" "rg" {
  name     = "sometestrg"
  location = "ukwest"
}

# create virtual network
resource "azurerm_virtual_network" "vnet" {
  name                = "tfvnet"
  address_space       = ["10.0.0.0/16"]
  location            = "ukwest"
  resource_group_name = "${azurerm_resource_group.rg.name}"
}

# create subnet
resource "azurerm_subnet" "subnet" {
  name                 = "tfsub"
  resource_group_name  = "${azurerm_resource_group.rg.name}"
  virtual_network_name = "${azurerm_virtual_network.vnet.name}"
  address_prefix       = "10.0.2.0/24"
  #network_security_group_id = "${azurerm_network_security_group.nsg.id}"
}

# create public IPs
resource "azurerm_public_ip" "ip" {
  name                         = "tfip"
  location                     = "ukwest"
  resource_group_name          = "${azurerm_resource_group.rg.name}"
  public_ip_address_allocation = "dynamic"
  domain_name_label            = "sometestdn"

  tags {
    environment = "staging"
  }
}

# create network interface
resource "azurerm_network_interface" "ni" {
  name                = "tfni"
  location            = "ukwest"
  resource_group_name = "${azurerm_resource_group.rg.name}"

  ip_configuration {
    name                          = "ipconfiguration"
    subnet_id                     = "${azurerm_subnet.subnet.id}"
    private_ip_address_allocation = "static"
    private_ip_address            = "10.0.2.5"
    public_ip_address_id          = "${azurerm_public_ip.ip.id}"
  }
}

# create storage account
resource "azurerm_storage_account" "storage" {
  name                = "someteststorage"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  location            = "ukwest"
  account_type        = "Standard_LRS"

  tags {
    environment = "staging"
  }
}

# create storage container
resource "azurerm_storage_container" "storagecont" {
  name                  = "vhd"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  storage_account_name  = "${azurerm_storage_account.storage.name}"
  container_access_type = "private"
  depends_on            = ["azurerm_storage_account.storage"]
}

# create virtual machine
resource "azurerm_virtual_machine" "vm" {
  name                  = "sometestvm"
  location              = "ukwest"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  network_interface_ids = ["${azurerm_network_interface.ni.id}"]
  vm_size               = "Standard_A0"

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name          = "myosdisk"
    vhd_uri       = "${azurerm_storage_account.storage.primary_blob_endpoint}${azurerm_storage_container.storagecont.name}/myosdisk.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  os_profile {
    computer_name  = "testhost"
    admin_username = "testuser"
    admin_password = "Password123"
  }

  os_profile_linux_config {
    disable_password_authentication = false
    ssh_keys = [{
      path     = "/home/testuser/.ssh/authorized_keys"
      key_data = "ssh-rsa xxx email@something.com"
    }]
  }

  connection {
    host        = "sometestdn.ukwest.cloudapp.azure.com"
    user        = "testuser"
    type        = "ssh"
    private_key = "${file("~/.ssh/id_rsa_unencrypted")}"
    timeout     = "1m"
    agent       = true
  }

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install docker.io -y",
      "git clone https://github.com/somepublicrepo.git",
      "cd Docker-sample",
      "sudo docker build -t mywebapp .",
      "sudo docker run -d -p 5000:5000 mywebapp"
    ]
  }

  tags {
    environment = "staging"
  }
}
According to your description, Azure Custom Script Extension is an option for you.
The Custom Script Extension downloads and executes scripts on Azure
virtual machines. This extension is useful for post deployment
configuration, software installation, or any other configuration /
management task.
Remove the provisioner "remote-exec" block and use the following instead:
resource "azurerm_virtual_machine_extension" "helloterraformvm" {
  name                 = "hostname"
  location             = "West US"
  resource_group_name  = "${azurerm_resource_group.helloterraformvm.name}"
  virtual_machine_name = "${azurerm_virtual_machine.helloterraformvm.name}"
  publisher            = "Microsoft.OSTCExtensions"
  type                 = "CustomScriptForLinux"
  type_handler_version = "1.2"

  settings = <<SETTINGS
    {
        "commandToExecute": "apt-get install docker.io -y"
    }
SETTINGS
}
Note: the command is executed by the root user, so don't use sudo.
For more information, please refer to this link: azurerm_virtual_machine_extension.
For a list of possible extensions, you can use the Azure CLI command az vm extension image list -o table
Update: The above example only supports a single command. If you need multiple commands - for example, to install Docker on your VM - you need:
apt-get update
apt-get install docker.io -y
Save that as a file named script.sh and upload it to an Azure Storage account or GitHub (the file must be publicly accessible). Then modify the Terraform file like below:
settings = <<SETTINGS
{
"fileUris": ["https://gist.githubusercontent.com/Walter-Shui/dedb53f71da126a179544c91d267cdce/raw/bb3e4d90e3291530570eca6f4ff7981fdcab695c/script.sh"],
"commandToExecute": "sh script.sh"
}
SETTINGS
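Alternatively, several commands can be chained into a single commandToExecute without hosting a script file, since the value is run by a shell (a sketch):

```hcl
settings = <<SETTINGS
{
  "commandToExecute": "apt-get update && apt-get install docker.io -y"
}
SETTINGS
```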