I have a Proxmox server running VM-103, and I need to send a file or ISO to VM-103 using Terraform. Does anybody know how to use the file provisioner to send a local file to an existing remote VM? Any suggestions or leads would be appreciated.
In Terraform:
resource "null_resource" "ssh_target" {
connection {
type = "ssh"
user = user
host = IP
private_key = file(var.ssh_key)
}
provisioner "file" {
source = ".."
destination = "/tmp/default"
}
}
Using Ansible is easier for this, isn't it?
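If you do go the Ansible route, a common pattern is to let Terraform hand off to a playbook via a local-exec provisioner once the VM is reachable. A minimal sketch, assuming a hypothetical copy_iso.yml playbook and reusing the connection variables from the snippet above:

resource "null_resource" "copy_with_ansible" {
  provisioner "local-exec" {
    # copy_iso.yml is a hypothetical playbook that copies the file/ISO to VM-103.
    command = "ansible-playbook -i '${var.vm_ip},' -u ${var.ssh_user} --private-key ${var.ssh_key} copy_iso.yml"
  }
}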
I have a Terraform infrastructure where I have to install a Windows 10 server EC2 instance with an additional volume (D:).
The Terraform configuration is quite easy and well explained in the volume_attachment documentation.
To initialize, attach, and format the new volume I found this answer; I tested it (by hand) and it works as expected.
The problem is how to automate everything: the aws_volume_attachment depends on the aws_instance, so I can't run the script to initialize, attach, and format the new volume in the user_data section of the aws_instance, since the aws_volume_attachment is not yet created by Terraform.
I'm trying to execute the script with a null_resource.
To configure the instance image with Packer I'm using WinRM with the following configuration:
source "amazon-ebs" "windows" {
ami_name = var.image_name
communicator = "winrm"
instance_type = "t2.micro"
user_data_file = "setup.txt"
winrm_insecure = true
winrm_port = 5986
winrm_use_ssl = true
winrm_username = "Administrator"
so I tried to replicate the same connection for the null_resource in Terraform:
resource "null_resource" "it" {
depends_on = [aws_volume_attachment.it]
triggers = { instance_id = local.jenkins_build_win.id, volume_id = aws_ebs_volume.it.id }
connection {
host = local.jenkins_build_win.fqdn
https = true
insecure = true
password = local.jenkins_build_win.password
port = 5986
type = "winrm"
user = "Administrator"
}
provisioner "remote-exec" {
inline = [
"Initialize-Disk -Number 1 -PartitionStyle MBR",
"$part = New-Partition -DiskNumber 1 -UseMaximumSize -IsActive -AssignDriveLetter",
"Format-Volume -DriveLetter $part.DriveLetter -Confirm:$FALSE"
]
}
}
local.jenkins_build_win.fqdn and local.jenkins_build_win.password resolve correctly (I wrote them out with a local_file resource and I can use them to connect to the instance with Remote Desktop), but Terraform can't connect to the instance. :(
Running Terraform with TF_LOG=trace, the only detail I can get for the error is:
[DEBUG] connecting to remote shell using WinRM
[ERROR] error creating shell: unknown error Post "https://{fqdn}:5986/wsman": read tcp {local_ip}:44538->{remote_ip}:5986: read: connection reset by peer
While running Packer with PACKER_LOG=1 I can't get any details on the WinRM connection; my intention was to compare the calls made by Packer with the ones made by Terraform to try to identify the problem...
I feel I'm stuck. :( Any idea?
Using Terraform, I want to copy certain files as soon as the VM is up and running. I'm trying to do this with the file provisioner, but no luck yet. Below is the error I get:
Error: Failed to read ssh private key: no key found
I am using a Windows machine as the host to copy files to a certain location in the new VM.
resource "null_resource" remoteExecProvisionerWFolder {
depends_on = [
azurerm_virtual_machine.bastion_vm
]
provisioner "file" {
source = "test.txt"
destination = "/tmp/test.txt"
}
connection {
host = data.azurerm_public_ip.test.ip_address
type = "ssh"
user = var.usernameprivate_key = file("./id_rsa_xyz.ppk")
timeout = "2m"
agent = "false"
}
I'd appreciate a quick response.
Try the following:
connection {
  host        = data.azurerm_public_ip.test.ip_address
  type        = "ssh"
  user        = var.username
  private_key = file("./id_rsa_xyz")
  timeout     = "2m"
  agent       = "false"
}
Otherwise, create a private key with OpenSSH:
https://learn.microsoft.com/en-us/windows-server/administration/openssh/openssh_keymanagement#user-key-generation
and then use that key pair instead. It usually works with .pem files.
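A rough sketch of how that could look, assuming a key pair generated on the Windows host with ssh-keygen and the matching public key already installed on the VM (the key path here is illustrative):

# Generated with: ssh-keygen -t rsa -b 4096 -m PEM -f id_rsa_xyz
# Terraform's SSH connection expects an OpenSSH/PEM-format private key, not a PuTTY .ppk file.
connection {
  host        = data.azurerm_public_ip.test.ip_address
  type        = "ssh"
  user        = var.username
  private_key = file("${path.module}/id_rsa_xyz")
  timeout     = "2m"
}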
With Terraform 0.12 there is a templatefile function, but I haven't figured out the syntax for passing it a non-trivial map as the second argument and then executing the result remotely as the newly created instance's provisioning step.
Here's the gist of what I'm trying to do, although it doesn't parse properly because one can't just create a local variable named scriptstr within the resource block.
What I'm really trying to do is have the output of the templatefile call executed on the remote side once the provisioner can SSH to the machine; so far I've gone down the path of writing the templatefile output to a local file via the local-exec provisioner. It's probably easy; I just haven't found the documentation or examples to understand the necessary syntax. TIA
resource "aws_instance" "server" {
count = "${var.servers}"
ami = "${local.ami}"
instance_type = "${var.instance_type}"
key_name = "${local.key_name}"
subnet_id = "${element(aws_subnet.consul.*.id, count.index)}"
iam_instance_profile = "${aws_iam_instance_profile.consul-join.name}"
vpc_security_group_ids = ["${aws_security_group.consul.id}"]
ebs_block_device {
device_name = "/dev/sda1"
volume_size = 2
}
tags = "${map(
"Name", "${var.namespace}-server-${count.index}",
var.consul_join_tag_key, var.consul_join_tag_value
)}"
scriptstr = templatefile("${path.module}/templates/consul.sh.tpl",
{
consul_version = "${local.consul_version}"
config = <<EOF
"bootstrap_expect": ${var.servers},
"node_name": "${var.namespace}-server-${count.index}",
"retry_join": ["provider=aws tag_key=${var.consul_join_tag_key} tag_value=${var.consul_join_tag_value}"],
"server": true
EOF
})
provisioner "local-exec" {
command = "echo ${scriptstr} > ${var.namespace}-server-${count.index}.init.sh"
}
provisioner "remote-exec" {
script = "${var.namespace}-server-${count.index}.init.sh"
connection {
type = "ssh"
user = "clear"
private_key = file("${local.private_key_file}")
}
}
}
In your question I can see that the higher-level problem you seem to be trying to solve here is creating a pool of HashiCorp Consul servers and then, once they are all booted up, telling them about each other so that they can form a cluster.
Provisioners are essentially a "last resort" in Terraform, provided out of pragmatism because sometimes logging in to a host and running commands on it is the only way to get a job done. An alternative available in this case is to instead pass the information from Terraform to the server via the aws_instance user_data argument, which will then allow the servers to boot up and form a cluster immediately, rather than being delayed until Terraform is able to connect via SSH.
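A rough sketch of that user_data approach (assuming the AMI's first-boot mechanism, e.g. cloud-init, executes the rendered script; the template path and variables mirror the question):

resource "aws_instance" "server" {
  count = var.servers
  # ... ami, instance_type, networking and tags as in the question ...

  user_data = templatefile("${path.module}/templates/consul.sh.tpl", {
    consul_version = local.consul_version
    config = <<EOF
"bootstrap_expect": ${var.servers},
"node_name": "${var.namespace}-server-${count.index}",
"retry_join": ["provider=aws tag_key=${var.consul_join_tag_key} tag_value=${var.consul_join_tag_value}"],
"server": true
EOF
  })
}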
Either way, I'd generally prefer to have the main body of the script I intend to run already included in the AMI so that Terraform can just run it with some arguments, since that then reduces the problem to just templating the invocation of that script rather than the whole script:
provisioner "remote-exec" {
inline = ["/usr/local/bin/init-consul --expect='${var.servers}' etc, etc"]
connection {
type = "ssh"
user = "clear"
private_key = file("${local.private_key_file}")
}
}
However, if templating an entire script is what you want or need to do, I'd upload it first using the file provisioner and then run it, like this:
provisioner "file" {
destination = "/tmp/consul.sh"
content = templatefile("${path.module}/templates/consul.sh.tpl", {
consul_version = "${local.consul_version}"
config = <<EOF
"bootstrap_expect": ${var.servers},
"node_name": "${var.namespace}-server-${count.index}",
"retry_join": ["provider=aws tag_key=${var.consul_join_tag_key} tag_value=${var.consul_join_tag_value}"],
"server": true
EOF
})
}
provisioner "remote-exec" {
inline = ["sh /tmp/consul.sh"]
}
I was trying to run
terraform apply
but got the error below:
1 error(s) occurred:
digitalocean_droplet.testvm[0]: Resource 'digitalocean_droplet.testvm' not found for variable
'digitalocean_droplet.testvm.ipv4_address'
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with any
resources that successfully completed. Please address the error above
and apply again to incrementally change your infrastructure.
How can I pass the public IP of the created droplet to the local-exec provisioner command?
Below is my .tf file:
provider "digitalocean" {
token = "----TOKEN----"
}
resource "digitalocean_droplet" "testvm" {
count = "10"
name = "do-instance-${count.index}"
image = "ubuntu-16-04-x64"
size = "512mb"
region = "nyc3"
ipv6 = true
private_networking = false
ssh_keys = [
"----SSH KEY----"
]
provisioner "local-exec" {
command = "fab production deploy ${digitalocean_droplet.testvm.ipv4_address}"
}
}
Thanks in advance!
For the local-exec provisioner you can make use of the self keyword. In this case it would be ${self.ipv4_address}.
My guess is that your snippet would have worked if you hadn't put count = "10" in the testvm droplet. You can also make use of ${count.index}.
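A minimal sketch of the provisioner using self, with the same fab command from the question:

provisioner "local-exec" {
  command = "fab production deploy ${self.ipv4_address}"
}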
More info: https://www.terraform.io/docs/provisioners/
Also, I found this GitHub issue that might be helpful to you.
Hope it helps.
I'm trying to populate files on an Ubuntu 16.04 server created in Azure with Terraform v0.9.3, using the file provisioner from OS X Sierra. None of my file tests work, even when I try to copy into publicly writable directories (/var/tmp, /tmp). Is this another "works in AWS but doesn't work with azurerm" feature? Nothing from Google.
Terraform snippet
# copy app file into place:
provisioner "file" {
  source      = "/Users/person/Terraform/Azure/files/busybox.sh"
  destination = "/var/tmp/busybox.sh"
}

# can I copy as root?:
provisioner "file" {
  source      = "/Users/person/Terraform/Azure/files/random_file"
  destination = "/root/QWERTYFILE"
}

# can I copy anywhere?:
provisioner "file" {
  source      = "/Users/person/Coding/Azure/files/random_file"
  destination = "/tmp/"
}
Did you add the connection section as below? Let me know if it works or not.
# Copies the file as the root user using SSH
provisioner "file" {
  source      = "conf/myapp.conf"
  destination = "/etc/myapp.conf"

  connection {
    type     = "ssh"
    user     = "root"
    password = "${var.root_password}"
  }
}
You can set private_key if you don't want to use a password.
private_key - The contents of an SSH key to use for the connection. These can be loaded from a file on disk using the file() interpolation function. This takes preference over the password if provided.
private_key = "${file("${path.module}/my-private-key")}"
Refer:
Provisioner Connections