I'm trying to populate files on an Ubuntu 16.04 server created in Azure with Terraform v0.9.3, using the file provisioner from OSX Sierra. None of the file tests work, even when I try to copy into publicly writable directories (/var/tmp, /tmp). Is this another "works in AWS but doesn't work with azurerm" feature? I've found nothing on Google.
Terraform snippet
# copy app file into place:
provisioner "file" {
  source      = "/Users/person/Terraform/Azure/files/busybox.sh"
  destination = "/var/tmp/busybox.sh"
}

# can I copy as root?:
provisioner "file" {
  source      = "/Users/person/Terraform/Azure/files/random_file"
  destination = "/root/QWERTYFILE"
}

# can I copy anywhere?:
provisioner "file" {
  source      = "/Users/person/Coding/Azure/files/random_file"
  destination = "/tmp/"
}
Did you add the connection block as below? Let me know if it works or not.
# Copies the file as the root user using SSH
provisioner "file" {
  source      = "conf/myapp.conf"
  destination = "/etc/myapp.conf"

  connection {
    type     = "ssh"
    user     = "root"
    password = "${var.root_password}"
  }
}
You can set private_key if you don't want to use a password.
private_key - The contents of an SSH key to use for the connection. These can be loaded from a file on disk using the file() interpolation function. This takes preference over the password if provided.
private_key = "${file("${path.module}/my-private-key")}"
Refer:
Provisioner Connections
Related
I am trying to create a script with Terraform that sends a file contained in a folder and overwrites the old file on the server if it has changed since.
I succeeded in sending the file to the server; however, when I run terraform plan after modifying the file, it tells me that my configuration has not changed. I don't want to have to modify an environment variable by hand;
I would like this to happen automatically. Has anyone ever had to deal with this situation?
My try #1:
resource "null_resource" "example" {
provisioner "file" {
source = "${path.cwd}/conf/file"
destination = "/home/user/conf/file"
connection {
host = "IP"
user = "user"
private_key = file("C:\\Users\\user\\.ssh\\sshTerraformDeployment")
}
}
}
Try #2:
resource "null_resource" "example" {
provisioner "remote-exec" {
inline = [
"set -e",
"cd /home/user/conf",
"rsync --ignore-existing --checksum -avz -e 'ssh -i /root/.ssh/sshTerraformDeployment' ${path.cwd}/conf root#host:/home/user/conf"
]
connection {
host = "IP"
user = "user"
private_key = file("C:\\Users\\user\\.ssh\\sshTerraformDeployment")
}
}
}
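One common pattern for re-uploading only when the file changes (a sketch, using the same placeholder host, user, and key as above) is to key a null_resource trigger on the file's content hash, so Terraform replaces the resource and re-runs the provisioner whenever the file's contents change; filemd5 requires Terraform 0.12 or later:

resource "null_resource" "copy_conf" {
  # Replace this resource (and re-run the provisioner) whenever the file content changes
  triggers = {
    file_hash = filemd5("${path.cwd}/conf/file")
  }

  provisioner "file" {
    source      = "${path.cwd}/conf/file"
    destination = "/home/user/conf/file"

    connection {
      host        = "IP"
      user        = "user"
      private_key = file("C:\\Users\\user\\.ssh\\sshTerraformDeployment")
    }
  }
}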
I have a Proxmox server running VM-103, and I need to send a file or ISO to VM-103 using Terraform. Does anybody have an idea how to use the file provisioner to send a local file to an existing remote VM? Any suggestions or leads will be appreciated.
In Terraform:
resource "null_resource" "ssh_target" {
connection {
type = "ssh"
user = user
host = IP
private_key = file(var.ssh_key)
}
provisioner "file" {
source = ".."
destination = "/tmp/default"
}
}
Or use Ansible, which is easier, isn't it?
Using Terraform, I want to copy certain files as soon as the VM is up and running. I am trying to do this with the file provisioner, but no luck yet. Below is the error I get:
Error: Failed to read ssh private key: no key found
I am using a Windows machine as the host, and I am copying files from it to a certain location on a new VM.
resource "null_resource" remoteExecProvisionerWFolder {
depends_on = [
azurerm_virtual_machine.bastion_vm
]
provisioner "file" {
source = "test.txt"
destination = "/tmp/test.txt"
}
connection {
host = data.azurerm_public_ip.test.ip_address
type = "ssh"
user = var.usernameprivate_key = file("./id_rsa_xyz.ppk")
timeout = "2m"
agent = "false"
}
Appreciate a quick response.
Try the following:
connection {
  host        = data.azurerm_public_ip.test.ip_address
  type        = "ssh"
  user        = var.username
  private_key = file("./id_rsa_xyz")
  timeout     = "2m"
  agent       = "false"
}
Otherwise, create a private key pair with OpenSSH:
https://learn.microsoft.com/en-us/windows-server/administration/openssh/openssh_keymanagement#user-key-generation
Then use that pair instead. It usually works with .pem files.
I want to push the Terraform state file to a GitHub repo. The file function in Terraform fails to read .tfstate files, so I need to change their extension to .txt first. To automate it, I created a null resource with a provisioner that runs the command to copy the tfstate file as a .txt file in the same directory. I came across the depends_on argument, which lets you specify that a particular resource needs to be created before the current one. However, it is not working, and I immediately get the error that the 'terraform.txt' file doesn't exist when the file function demands it.
provider "github" {
token = "TOKEN"
owner = "USERNAME"
}
resource "null_resource" "tfstate_to_txt" {
provisioner "local-exec" {
command = "copy terraform.tfstate terraform.txt"
}
}
resource "github_repository_file" "state_push" {
repository = "TerraformStates"
file = "terraform.tfstate"
content = file("terraform.txt")
depends_on = [null_resource.tfstate_to_txt]
}
The documentation for the file function explains this behavior:
This function can be used only with files that already exist on disk at the beginning of a Terraform run. Functions do not participate in the dependency graph, so this function cannot be used with files that are generated dynamically during a Terraform operation. We do not recommend using dynamic local files in Terraform configurations, but in rare situations where this is necessary you can use the local_file data source to read files while respecting resource dependencies.
This paragraph also includes a suggestion for how to get the result you wanted: use the local_file data source, from the hashicorp/local provider, to read the file as a resource operation (during the apply phase) rather than as part of configuration loading:
resource "null_resource" "tfstate_to_txt" {
triggers = {
source_file = "terraform.tfstate"
dest_file = "terraform.txt"
}
provisioner "local-exec" {
command = "copy ${self.triggers.source_file} ${self.triggers.dest_file}"
}
}
data "local_file" "state" {
filename = null_resource.tfstate_to_txt.triggers.dest_file
}
resource "github_repository_file" "state_push" {
repository = "TerraformStates"
file = "terraform.tfstate"
content = data.local_file.state.content
}
Please note that although the above should get the order of operations you were asking about, reading the terraform.tfstate file while Terraform is running is a very unusual thing to do, and is likely to result in undefined behavior because Terraform can repeatedly update that file at unpredictable moments throughout terraform apply.
If your intent is to have Terraform keep the state in a remote system rather than on local disk, the usual way to achieve that is to configure remote state, which will then cause Terraform to keep the state only remotely, and not use the local terraform.tfstate file at all.
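For illustration only, a minimal remote state configuration using the azurerm backend might look like this (all names below are placeholders; other backends such as s3 or gcs follow the same pattern):

terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"       # placeholder resource group
    storage_account_name = "tfstateaccount"   # placeholder storage account
    container_name       = "tfstate"          # placeholder container
    key                  = "terraform.tfstate"
  }
}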
depends_on does not really work with null_resource provisioners.
Here's a workaround that can help you:
resource "null_resource" "tfstate_to_txt" {
provisioner "local-exec" {
command = "copy terraform.tfstate terraform.txt"
}
}
resource "null_resource" "delay" {
provisioner "local-exec" {
command = "sleep 20"
}
triggers = {
"before" = null_resource.tfstate_to_txt.id
}
}
resource "github_repository_file" "state_push" {
repository = "TerraformStates"
file = "terraform.tfstate"
content = file("terraform.txt")
depends_on = ["null_resource.delay"]
}
The delay null resource will make sure the second resource runs after the first; if the copy command takes more time, just change the sleep to a higher number.
I would like to generate SSH keys with a local-exec provisioner, then read the contents of the files.
resource "null_resource" "generate-ssh-keys-pair" {
provisioner "local-exec" {
command = <<EOT
ssh-keygen -t rsa -b 4096 -C "test" -P "" -f "testkey"
EOT
}
}
data "local_file" "public-key" {
depends_on = [null_resource.generate-ssh-keys-pair]
filename = "testkey.pub"
}
data "local_file" "private-key" {
depends_on = [null_resource.generate-ssh-keys-pair]
filename = "testkey"
}
terraform plan works, but when I run the apply I get an error that testkey and testkey.pub don't exist.
Thanks
Instead of generating a file using an external command and then reading it in, I would suggest using the Terraform tls provider to generate the key within Terraform itself, using tls_private_key:
terraform {
  required_providers {
    tls = {
      source = "hashicorp/tls"
    }
  }
}

resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits  = 4096
}
The tls_private_key resource type exports two attributes that are equivalent to the two files you were intending to read in your example:
tls_private_key.example.private_key_pem: the private key in PEM format
tls_private_key.example.public_key_openssh: the public key in the format OpenSSH expects to find in .ssh/authorized_keys.
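For example (a sketch only; the host, user, and file paths are placeholders), these attributes can be referenced directly from other configuration instead of reading key files from disk:

# Use the generated private key in a provisioner connection
resource "null_resource" "copy_file" {
  provisioner "file" {
    source      = "conf/myapp.conf"
    destination = "/tmp/myapp.conf"

    connection {
      type        = "ssh"
      host        = "IP"
      user        = "user"
      private_key = tls_private_key.example.private_key_pem
    }
  }
}

# Expose the public half for use in an authorized_keys entry
output "public_key_openssh" {
  value = tls_private_key.example.public_key_openssh
}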
Please note the warning in the tls_private_key documentation that using this resource will cause the private key data to be saved in your Terraform state, and so you should protect that state data accordingly. That would also have been true for your approach of reading files from disk using data resources, because any value Terraform has available for use in expressions must always be stored in the state.
I ran your code and there are no problems with it. It correctly generates testkey and testkey.pub.
So whatever causes it to fail for you, it's not the snippet of code you've provided in the question. The fault must be outside this snippet.
Generating an SSH key in Terraform is inherently insecure because it gets stored in the tfstate file. However, I had a similar problem to solve and thought the most secure/usable approach was to use a secret management service plus a cloud bucket for the backend:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.3.0"
    }
  }

  backend "gcs" {
    bucket = "tfstatebucket"
    prefix = "terraform/production"
  }
}

// Import Ansible private key from Google Secrets Manager
resource "google_secret_manager_secret" "ansible_private_key" {
  secret_id = var.ansible_private_key_secret_id

  replication {
    automatic = true
  }
}

data "google_secret_manager_secret_version" "ansible_private_key" {
  secret  = google_secret_manager_secret.ansible_private_key.secret_id
  version = var.ansible_private_key_secret_id_version_number
}

resource "local_file" "ansible_imported_local_private_key" {
  sensitive_content = data.google_secret_manager_secret_version.ansible_private_key.secret_data
  filename          = var.ansible_imported_local_private_key_filename
  file_permission   = "0600"
}
In the case of GCP, I would add the secret in Google Secrets Manager, then use terraform import on the secret, which will in turn write it to the backend bucket. That way it doesn't get stored in Git as plain text, you can have the key file local to your project (.terraform shouldn't be under version control), and it's arguably more secure in the bucket.
So the workflow essentially is:
Human --> Secret Manager
Secret Manager --> Terraform Import --> GCS Bucket Backend
|--> Create .terraform/ssh_key
.terraform/ssh_key --> Terraform/Ansible/Whatever
HashiCorp Vault would be another way to address this, as sketched below.
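For instance, a rough sketch of the same idea with Vault (the secret path and key name below are hypothetical, and this assumes a configured hashicorp/vault provider):

# Hypothetical: read a pre-existing private key from Vault's KV store
data "vault_generic_secret" "ansible_private_key" {
  path = "secret/ansible/private_key"   # placeholder path
}

resource "local_file" "ansible_imported_local_private_key" {
  sensitive_content = data.vault_generic_secret.ansible_private_key.data["private_key"]
  filename          = ".terraform/ssh_key"
  file_permission   = "0600"
}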