Send and overwrite old file with Terraform on VPS - terraform

I am trying to create a Terraform script that sends a file contained in a folder and overwrites the old file on the server if it has changed since.
I succeeded in sending the file to the server; however, when I run "terraform plan" after having modified my file, it tells me that my configuration has not changed. I don't want to have to modify an environment variable by hand;
I would like it to be done automatically. Has anyone ever had to deal with this situation?
My try #1:
resource "null_resource" "example" {
provisioner "file" {
source = "${path.cwd}/conf/file"
destination = "/home/user/conf/file"
connection {
host = "IP"
user = "user"
private_key = file("C:\\Users\\user\\.ssh\\sshTerraformDeployment")
}
}
}
Try #2:
resource "null_resource" "example" {
provisioner "remote-exec" {
inline = [
"set -e",
"cd /home/user/conf",
"rsync --ignore-existing --checksum -avz -e 'ssh -i /root/.ssh/sshTerraformDeployment' ${path.cwd}/conf root#host:/home/user/conf"
]
connection {
host = "IP"
user = "user"
private_key = file("C:\\Users\\user\\.ssh\\sshTerraformDeployment")
}
}
}
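One way to make the re-upload happen automatically (a sketch, not part of the original question) is to give the null_resource a triggers map keyed to the file's hash, so that any change to the file's content forces the resource to be replaced and the file provisioner to run again on the next apply:
resource "null_resource" "example" {
  # Replacing the resource whenever the file's hash changes re-runs the provisioner.
  triggers = {
    file_hash = filemd5("${path.cwd}/conf/file")
  }

  provisioner "file" {
    source      = "${path.cwd}/conf/file"
    destination = "/home/user/conf/file"

    connection {
      host        = "IP"
      user        = "user"
      private_key = file("C:\\Users\\user\\.ssh\\sshTerraformDeployment")
    }
  }
}
With this, terraform plan reports a change whenever the local file differs from what was last applied (filemd5 requires Terraform 0.12 or later).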

Related

Execute bash script on Ubuntu using Terraform

Is it possible to execute shell commands on Ubuntu OS using a Terraform script?
I have to do some initial configuration before execution of Terraform scripts.
You could define a local-exec provisioner in your resource:
provisioner "local-exec" {
  command = "echo The server's IP address is ${self.private_ip}"
}
That will execute right after the resource is created. There are other types of provisioners; see https://www.terraform.io/language/resources/provisioners/syntax
It depends on where your Ubuntu OS is. If it's local, then you can do something like this:
resource "aws_instance" "web" {
# ...
provisioner "local-exec" {
command = "echo ${self.private_ip} >> private_ips.txt"
}
}
If it's a remote resource, for example an AWS EC2 instance:
resource "aws_instance" "web" {
# ...
# Establishes connection to be used by all
# generic remote provisioners (i.e. file/remote-exec)
connection {
type = "ssh"
user = "root"
password = var.root_password
host = self.public_ip
}
provisioner "remote-exec" {
inline = [
"puppet apply",
"consul join ${aws_instance.web.private_ip}",
]
}
}
Also, if it's an EC2 instance, a commonly used option is to define a script via user_data, which runs immediately after the resource is created, with root privileges, but only once; it will never run again even if you reboot the instance. In Terraform you can do something like this:
resource "aws_instance" "server" {
ami = "ami-123456"
instance_type = "t3.medium"
availability_zone = "eu-central-1b"
vpc_security_group_ids = [aws_security_group.server.id]
subnet_id = var.subnet1
private_ip = var.private-ip
key_name = var.key_name
associate_public_ip_address = true
tags = {
Name = "db-server"
}
user_data = <<EOF
mkdir abc
apt update && apt install nano
EOF
}

How to send directory/file using terraform?

I have a Proxmox server where VM-103 is running, and I need to send a file or ISO to VM-103 using Terraform. Does anybody have an idea of how to use the file provisioner to send a local file to an existing remote VM? Any suggestions or leads will be appreciated.
In Terraform:
resource "null_resource" "ssh_target" {
  connection {
    type        = "ssh"
    user        = "user"
    host        = "IP"
    private_key = file(var.ssh_key)
  }

  provisioner "file" {
    source      = ".."
    destination = "/tmp/default"
  }
}
Using Ansible is easier, isn't it?

Using function templatefile(path, vars) with a remote-exec provisioner

With terraform 0.12, there is a templatefile function but I haven't figured out the syntax for passing it a non-trivial map as the second argument and using the result to be executed remotely as the newly created instance's provisioning step.
Here's the gist of what I'm trying to do, although it doesn't parse properly because one can't just create a local variable within the resource block named scriptstr.
While I'm really trying to get the output of the templatefile call to be executed on the remote side, once the provisioner can ssh to the machine, I've so far gone down the path of trying to get the templatefile call output written to a local file via the local-exec provisioner. Probably easy, I just haven't found the documentation or examples to understand the syntax necessary. TIA
resource "aws_instance" "server" {
count = "${var.servers}"
ami = "${local.ami}"
instance_type = "${var.instance_type}"
key_name = "${local.key_name}"
subnet_id = "${element(aws_subnet.consul.*.id, count.index)}"
iam_instance_profile = "${aws_iam_instance_profile.consul-join.name}"
vpc_security_group_ids = ["${aws_security_group.consul.id}"]
ebs_block_device {
device_name = "/dev/sda1"
volume_size = 2
}
tags = "${map(
"Name", "${var.namespace}-server-${count.index}",
var.consul_join_tag_key, var.consul_join_tag_value
)}"
scriptstr = templatefile("${path.module}/templates/consul.sh.tpl",
{
consul_version = "${local.consul_version}"
config = <<EOF
"bootstrap_expect": ${var.servers},
"node_name": "${var.namespace}-server-${count.index}",
"retry_join": ["provider=aws tag_key=${var.consul_join_tag_key} tag_value=${var.consul_join_tag_value}"],
"server": true
EOF
})
provisioner "local-exec" {
command = "echo ${scriptstr} > ${var.namespace}-server-${count.index}.init.sh"
}
provisioner "remote-exec" {
script = "${var.namespace}-server-${count.index}.init.sh"
connection {
type = "ssh"
user = "clear"
private_key = file("${local.private_key_file}")
}
}
}
In your question I can see that the higher-level problem you seem to be trying to solve here is creating a pool of HashiCorp Consul servers and then, once they are all booted up, to tell them about each other so that they can form a cluster.
Provisioners are essentially a "last resort" in Terraform, provided out of pragmatism because sometimes logging in to a host and running commands on it is the only way to get a job done. An alternative available in this case is to instead pass the information from Terraform to the server via the aws_instance user_data argument, which will then allow the servers to boot up and form a cluster immediately, rather than being delayed until Terraform is able to connect via SSH.
Either way, I'd generally prefer to have the main body of the script I intend to run already included in the AMI so that Terraform can just run it with some arguments, since that then reduces the problem to just templating the invocation of that script rather than the whole script:
provisioner "remote-exec" {
inline = ["/usr/local/bin/init-consul --expect='${var.servers}' etc, etc"]
connection {
type = "ssh"
user = "clear"
private_key = file("${local.private_key_file}")
}
}
However, if templating an entire script is what you want or need to do, I'd upload it first using the file provisioner and then run it, like this:
provisioner "file" {
destination = "/tmp/consul.sh"
content = templatefile("${path.module}/templates/consul.sh.tpl", {
consul_version = "${local.consul_version}"
config = <<EOF
"bootstrap_expect": ${var.servers},
"node_name": "${var.namespace}-server-${count.index}",
"retry_join": ["provider=aws tag_key=${var.consul_join_tag_key} tag_value=${var.consul_join_tag_value}"],
"server": true
EOF
})
}
provisioner "remote-exec" {
inline = ["sh /tmp/consul.sh"]
}
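For completeness, here is a rough sketch of the user_data alternative mentioned at the start of this answer, assuming the rendered template is a complete shell script that cloud-init can execute on first boot (the other instance arguments from the question are omitted):
resource "aws_instance" "server" {
  count         = var.servers
  ami           = local.ami
  instance_type = var.instance_type
  # ... other arguments as in the question ...

  # Rendered at plan time and run by cloud-init on first boot,
  # so Terraform never has to open an SSH connection.
  user_data = templatefile("${path.module}/templates/consul.sh.tpl", {
    consul_version = local.consul_version
    config         = <<-EOF
      "bootstrap_expect": ${var.servers},
      "node_name": "${var.namespace}-server-${count.index}",
      "retry_join": ["provider=aws tag_key=${var.consul_join_tag_key} tag_value=${var.consul_join_tag_value}"],
      "server": true
    EOF
  })
}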

Run Destroy-Time Provisioner before local_file is deleted

I have a Terraform script which creates a config.json file and then runs a command that uses that config.json:
resource "local_file" "config" {
# Output vars to config
filename = "config.json"
content = "..."
# Deploy using config
provisioner "local-exec" {
command = "deploy"
}
}
This all works great, but when I run terraform destroy I'd like to run a different command - I tried to do this with a destroy-time provisioner in a null_resource by adding the following:
resource "null_resource" "test" {
provisioner "local-exec" {
when = "destroy"
command = "delete"
}
}
The script is run, but it runs after the config file is deleted - it errors, because it needs that config file to exist for it to know what to delete.
How would I fix this?
Thanks!
I moved the destroy time provisioner into the original resource, and it worked great:
resource "local_file" "config" {
# Output vars to config
filename = "config.json"
content = "..."
# Deploy using config
provisioner "local-exec" {
command = "deploy"
}
# Delete on_destroy
provisioner "local-exec" {
when = "destroy"
command = "delete"
}
}

How to get remote-exec provisioner to apply after disk attachments?

I have a script that I need to run after my instance has been provisioned and the volumes have been attached:
resource "aws_instance" "controller" {
...
provisioner "remote-exec" {
connection {
type = "ssh"
user = "centos"
}
inline = [
"download and run script to verify environment"
]
}
}
resource "aws_ebs_volume" "controller-ebs-sdb" {
...
}
resource "aws_volume_attachment" "controller-volume-attachment-sdb" {
device_name = "/dev/sdb"
volume_id = "${aws_ebs_volume.controller-ebs-sdb.id}"
instance_id = "${aws_instance.controller.id}"
}
Currently the script fails its environment check because, when it runs, the volume has not yet been attached.
Is it possible to run the remote-exec script only after the volumes have been attached?
You can run a provisioner on any resource (consider the null_resource pattern for an extreme version of this), so the best thing here is to run it on the aws_volume_attachment resource:
# ...

resource "aws_volume_attachment" "controller-volume-attachment-sdb" {
  device_name = "/dev/sdb"
  volume_id   = "${aws_ebs_volume.controller-ebs-sdb.id}"
  instance_id = "${aws_instance.controller.id}"

  provisioner "remote-exec" {
    connection {
      host = "${aws_instance.controller.public_ip}"
      type = "ssh"
      user = "centos"
    }

    inline = [
      "download and run script to verify environment"
    ]
  }
}
You can also consider moving the provisioner into a null_resource whose triggers reference the attachment. Other, cruder options are to sleep for a few seconds, have the script retry itself, or check for the existence of the disk before attempting the verification.
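A minimal sketch of that null_resource variant, reusing the resource names from the question (the inline command is still a placeholder):
resource "null_resource" "verify_environment" {
  # Referencing the attachment both delays this resource until the volume
  # is attached and re-runs the provisioner if the attachment is replaced.
  triggers = {
    attachment_id = aws_volume_attachment.controller-volume-attachment-sdb.id
  }

  provisioner "remote-exec" {
    connection {
      host = aws_instance.controller.public_ip
      type = "ssh"
      user = "centos"
    }

    inline = [
      "download and run script to verify environment"
    ]
  }
}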
