I am trying to execute a remote-exec provisioner when deploying a VM in Azure, but the inline code in remote-exec never executes.
Here is my provisioner and connection code:
provisioner "remote-exec" {
  inline = [
    "touch newfile.txt",
    "touch newfile2.txt",
  ]
}

connection {
  type        = "ssh"
  host        = "${azurerm_public_ip.publicip.ip_address}"
  user        = "testuser"
  private_key = "${file("~/.ssh/id_rsa")}"
  agent       = false
}
The code never executes and gives the error:
Error: Failed to read ssh private key: no key found
The key (id_rsa) is saved in the same directory as the main.tf file on the machine where I am running Terraform.
Please suggest what is wrong here.
As ydaetskcoR commented, your code private_key = "${file("~/.ssh/id_rsa")}" indicates that the private key must exist at .ssh/id_rsa under your home directory, like /home/username on Linux or C:\Users\username on Windows.
You could save the key (id_rsa) in that directory to match your code; otherwise, you need to put the key's actual path in your code.
For example, edit it to private_key = "${file("${path.module}/id_rsa")}"
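A minimal sketch of the corrected connection block, assuming the key sits next to main.tf in the module directory:

connection {
  type        = "ssh"
  host        = "${azurerm_public_ip.publicip.ip_address}"
  user        = "testuser"
  # path.module resolves to the directory containing this module's
  # configuration files, so this assumes id_rsa sits next to main.tf.
  private_key = "${file("${path.module}/id_rsa")}"
  agent       = false
}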
Hello, I have a Terraform project. Currently I download the project onto the host and run terraform apply there, but I want to try doing this over SSH. In order to do that I have the following code:
resource "kubernetes_namespace" "main" {
  connection {
    type        = "ssh"
    user        = "root"
    private_key = var.private_key
    host        = var.host
    host_key    = var.host_key
  }

  metadata {
    name = var.namespace
  }
}
I don't have a password, because I only use ssh -i privatekey user@host to get access to that host.
But I get the following error:
Error: the server could not find the requested resource (post namespaces)
The provider is the following:
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "microk8s"
}
and the file and the context are correct.
EDIT
Terraform looks for the kubeconfig on localhost instead of on the remote host.
How can I solve this and apply changes remotely using ssh?
Thanks
I'm looking for a way to see what is going on during creation of a virtual machine. I use a complex cluster configuration, and to test whether it's working I need to be able to see the output, but in some cases I can't because of sensitive values. This is related to running the remote-exec provisioner:
module.MongoInstall.azurerm_virtual_machine.MongoVirtualMachine[2] (remote-exec): (output suppressed due to sensitive value in config)
Could you please help me? This is my provisioner:
provisioner "remote-exec" {
  inline = [
    "chmod +x /tmp/mongo-activate.sh",
    "cd /tmp",
    "sudo ./mongo-activate.sh ${var.username} ${var.vmpassword} ${var.mongopassword} ${local.isCluster} ${join(",", azurerm_public_ip.MongoPublicIpAddress.*.fqdn)} ${var.hasArbiter}",
    "rm mongo-activate.sh",
  ]

  connection {
    type     = "ssh"
    host     = "${element(azurerm_public_ip.MongoPublicIpAddress.*.ip_address, 0)}"
    user     = "${var.username}"
    password = var.vmpassword
    timeout  = "15m"
  }
}
Example of variables:
variable "vmpassword" {
  default = "testtesttest" // psw: mongo VM
}
Thank you, Andriy, for your suggestion.
Yes, we cannot see sensitive values such as passwords and keys in the console after terraform apply; Terraform suppresses logging from the provisioner. If the provisioner configuration or connection info includes sensitive values, we need to unmark them before calling the provisioner. Failing to do so causes serialization to error.
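For example, a minimal sketch of unmarking with the nonsensitive() function, assuming Terraform 0.15 or later where it is available (debugging only, since the value then appears in plain text in the logs):

provisioner "remote-exec" {
  inline = [
    "chmod +x /tmp/mongo-activate.sh",
    # nonsensitive() strips the sensitive marking from var.vmpassword,
    # so Terraform no longer suppresses this provisioner's output.
    "sudo /tmp/mongo-activate.sh ${var.username} ${nonsensitive(var.vmpassword)}",
  ]
}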
In my Terraform I have a mysql module as follows:
# create ssh tunnel to RDS instance
resource "null_resource" "ssh_tunnel" {
  provisioner "local-exec" {
    command = "ssh -i ${var.private_key} -L 3306:${var.rds_endpoint} -fN ec2-user@${var.bastion_ip} -v >./stdout.log 2>./stderr.log"
  }

  triggers = {
    always_run = timestamp()
  }
}

# create database
resource "mysql_database" "rds" {
  name       = var.db_name
  depends_on = [null_resource.ssh_tunnel]
}
When I add a new module and run terraform apply for the first time, it works as expected.
But when terraform apply runs without any changes, I get an error:
Could not connect to server: dial tcp 127.0.0.1:3306: connect: connection refused
If I understand correctly, the local-exec provisioner should execute the command every time because of the trigger settings. Could you explain how this should work properly?
I suspect this happens because your first local-exec creates the tunnel in the background (-f). The second execution then fails because the first tunnel still exists; you never close it in your code. You would have to extend your code to check whether a tunnel already exists and properly close it when you are done using it.
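For example, a minimal sketch of an idempotent tunnel using ssh's control-socket options (-M, -S, and -O check); the socket path /tmp/rds-tunnel.sock is an arbitrary choice:

resource "null_resource" "ssh_tunnel" {
  provisioner "local-exec" {
    # Reuse the tunnel if its control socket still answers; otherwise
    # open a new one in master mode (-M) so it can later be checked or
    # closed with `ssh -S /tmp/rds-tunnel.sock -O exit ec2-user@<bastion>`.
    command = <<-EOT
      ssh -S /tmp/rds-tunnel.sock -O check ec2-user@${var.bastion_ip} 2>/dev/null || \
      ssh -i ${var.private_key} -M -S /tmp/rds-tunnel.sock \
          -L 3306:${var.rds_endpoint} -fN ec2-user@${var.bastion_ip}
    EOT
  }

  triggers = {
    always_run = timestamp()
  }
}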
Finally, I implemented this solution https://registry.terraform.io/modules/flaupretre/tunnel/ssh/latest instead of using a null_resource.
I'm trying to provision a Databricks PAT token with a null_resource and local-exec.
This is the code block:
resource "null_resource" "databricks_token" {
  triggers = {
    workspace        = azurerm_databricks_workspace.databricks.id
    key_vault_access = azurerm_key_vault_access_policy.terraform.id
  }

  provisioner "local-exec" {
    command = "${path.cwd}/generate-pat-token.sh"

    environment = {
      RESOURCE_GROUP                   = var.resource_group_name
      DATABRICKS_WORKSPACE_RESOURCE_ID = azurerm_databricks_workspace.databricks.id
      KEY_VAULT                        = azurerm_key_vault.databricks_token.name
      SECRET_NAME                      = "DATABRICKS-TOKEN"
      DATABRICKS_ENDPOINT              = "https://westeurope.azuredatabricks.net"
    }
  }
}
However, I get the following error:
2020-02-26T19:41:51.9455473Z null_resource.databricks_token: Provisioning with 'local-exec'...
2020-02-26T19:41:51.9458257Z null_resource.databricks_token (local-exec): Executing: ["/bin/sh" "-c" "/home/vsts/work/r1/a/_Infrastructure/Infrastructure/ei-project/devtest/generate-pat-token.sh"]
2020-02-26T19:41:51.9480441Z null_resource.databricks_token (local-exec): /bin/sh: 1: /home/vsts/work/r1/a/_Infrastructure/Infrastructure/ei-project/devtest/generate-pat-token.sh: Permission denied
2020-02-26T19:41:52.0399075Z Error: Error running command '/home/vsts/work/r1/a/_Infrastructure/Infrastructure/ei-project/devtest/generate-pat-token.sh': exit status 126. Output: /bin/sh: 1: /home/vsts/work/r1/a/_Infrastructure/Infrastructure/ei-project/devtest/generate-pat-token.sh: Permission denied
Side note: this is with Azure DevOps.
Any idea how to solve the permission denied error?
The root of this problem is how Azure DevOps stores artifacts and repositories. Here is a snippet from their documentation explaining why this happens:
https://learn.microsoft.com/en-us/azure/devops/pipelines/artifacts/build-artifacts?view=azure-devops&tabs=yaml#download-to-debug
Under the Tips section, you will see the following:
Build artifacts are stored on a Windows filesystem, which causes all UNIX permissions to be lost, including the execution bit. You might need to restore the correct UNIX permissions after downloading your artifacts from Azure Pipelines or TFS.
This means the files you download (in this case your shell script) have had all Unix permissions wiped. To fix this problem, add a step that first sets the proper permissions on the shell script before executing it. See the example below, where I have added the fix to the code you provided.
resource "null_resource" "databricks_token" {
  triggers = {
    workspace        = azurerm_databricks_workspace.databricks.id
    key_vault_access = azurerm_key_vault_access_policy.terraform.id
  }

  provisioner "local-exec" {
    command = "chmod +x ${path.cwd}/generate-pat-token.sh; ${path.cwd}/generate-pat-token.sh"

    environment = {
      RESOURCE_GROUP                   = var.resource_group_name
      DATABRICKS_WORKSPACE_RESOURCE_ID = azurerm_databricks_workspace.databricks.id
      KEY_VAULT                        = azurerm_key_vault.databricks_token.name
      SECRET_NAME                      = "DATABRICKS-TOKEN"
      DATABRICKS_ENDPOINT              = "https://westeurope.azuredatabricks.net"
    }
  }
}
The command section will first set the execute permissions on the shell script and then execute it.
I created an aws_db_instance to provision an RDS MySQL database using Terraform configuration. I need to execute SQL scripts (CREATE TABLE and INSERT statements) on the RDS instance, and I'm stuck on what command to use here. Does anyone have sample code for my use case? Please advise. Thanks.
resource "aws_db_instance" "mydb" {
  # ...

  provisioner "local-exec" {
    command = "command to execute script.sql"
  }
}
This is possible using a null_resource that depends on aws_db_instance.my_db. That way the host is available when you run the command. Note that this only works if nothing prevents you from reaching the DB, such as security group ingress rules or the instance not being publicly accessible.
Example:
resource "null_resource" "setup_db" {
  # Wait for the DB to be ready.
  depends_on = [aws_db_instance.my_db]

  provisioner "local-exec" {
    command = "mysql -u ${aws_db_instance.my_db.username} -p${var.my_db_password} -h ${aws_db_instance.my_db.address} < file.sql"
  }
}
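To satisfy the security group requirement, you could allow the address of the machine running terraform apply; a minimal sketch of such an ingress rule (the security group reference and the CIDR are assumptions):

resource "aws_security_group_rule" "allow_mysql_from_runner" {
  type              = "ingress"
  from_port         = 3306
  to_port           = 3306
  protocol          = "tcp"
  # CIDR of the machine running terraform apply; an assumption.
  cidr_blocks       = ["203.0.113.10/32"]
  security_group_id = aws_security_group.db.id
}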
I don't believe you can use a provisioner with that type of resource. One option you could explore is an additional step that takes the address of the RDS instance from a Terraform output and runs the SQL script.
So, for instance in a CI environment, you'd have Create Database -> Load Database -> Finished.
Below is the Terraform to create the resource and output its address.
resource "aws_db_instance" "mydb" {
  # ...
}

output "username" {
  value = "${aws_db_instance.mydb.username}"
}

output "address" {
  value = "${aws_db_instance.mydb.address}"
}
The Load Database step would then run a shell script containing the SQL logic, using terraform output address to obtain the address of the instance.
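A minimal sketch of that Load Database step as a shell script, assuming Terraform 0.14+ for the -raw output flag; MY_DB_PASSWORD and script.sql are stand-ins for however your CI supplies them:

#!/bin/sh
# Hypothetical CI "Load Database" step: read connection details from the
# Terraform outputs, then run the SQL script against the new instance.
ADDRESS=$(terraform output -raw address)
USERNAME=$(terraform output -raw username)
mysql -u "$USERNAME" -p"$MY_DB_PASSWORD" -h "$ADDRESS" < script.sql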