Using Terraform, I want to copy certain files to a VM as soon as it is up and running. I am trying to do this with the file provisioner, but no luck yet. Below is the error I get:
Error: Failed to read ssh private key: no key found
I am using a Windows machine as the host and copying files from it to a certain location in the new VM.
resource "null_resource" remoteExecProvisionerWFolder {
depends_on = [
azurerm_virtual_machine.bastion_vm
]
provisioner "file" {
source = "test.txt"
destination = "/tmp/test.txt"
}
connection {
host = data.azurerm_public_ip.test.ip_address
type = "ssh"
user = var.usernameprivate_key = file("./id_rsa_xyz.ppk")
timeout = "2m"
agent = "false"
}
Appreciate a quick response.
Try the following:
connection {
  host        = data.azurerm_public_ip.test.ip_address
  type        = "ssh"
  user        = var.username
  private_key = file("./id_rsa_xyz")
  timeout     = "2m"
  agent       = "false"
}
Otherwise, create a private key with OpenSSH:
https://learn.microsoft.com/en-us/windows-server/administration/openssh/openssh_keymanagement#user-key-generation
and then use that pair instead. It usually works with .pem files.
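As an alternative sketch (assuming the hashicorp/tls provider and illustrative resource names), the key pair can also be generated by Terraform itself, so the connection block reads a PEM-format key directly instead of a PuTTY .ppk file; a key generated on the command line with ssh-keygen -m PEM -f id_rsa_xyz works the same way with file():

resource "tls_private_key" "bastion" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

connection {
  host        = data.azurerm_public_ip.test.ip_address
  type        = "ssh"
  user        = var.username
  # PEM-encoded private key; the matching public key
  # (tls_private_key.bastion.public_key_openssh) still has to be
  # installed on the VM, e.g. via the VM resource's SSH key settings.
  private_key = tls_private_key.bastion.private_key_pem
  timeout     = "2m"
}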
Related
I have a Proxmox server running VM-103, and I need to send a file or ISO to VM-103 using Terraform. Does anybody have an idea how to use the file provisioner to send a local file to an existing remote VM? Any suggestions or leads will be appreciated.
In Terraform:
resource "null_resource" "ssh_target" {
  connection {
    type        = "ssh"
    user        = var.ssh_user   # placeholder for the remote user, e.g. "root"
    host        = var.vm_ip      # placeholder for the VM's reachable IP
    private_key = file(var.ssh_key)
  }

  provisioner "file" {
    source      = ".."
    destination = "/tmp/default"
  }
}
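The sketch above assumes a few input variables; the names here are illustrative, not part of the original answer:

variable "ssh_user" {
  type    = string
  default = "root"
}

variable "vm_ip" {
  type = string
}

variable "ssh_key" {
  type        = string
  description = "Path to the private key matching the public key installed on the VM"
}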
Using Ansible might be easier, wouldn't it?
After I run terraform apply and type 'yes' I get the following error 3 times (since I have 3 null resources):
Error: Unsupported attribute: This value does not have any attributes.
I checked each of the entries in my connection block, and the error seems to come from the host attribute. I believe it is because ips.address is only generated after the server has launched, while Terraform wants a value for host before the BareMetal server has been deployed. Is there something wrong I'm doing here? Either I'm using the wrong value (I've tried ips.id as well), or I need to create some sort of output for when ips.address has been generated and then set host from it. I haven't been able to find any resources on BareMetal provisioning in Scaleway. Here is my code, with instance_number = 3 (a possible fix is sketched after the code).
provider "scaleway" {
access_key = var.ACCESS_KEY
secret_key = var.SECRET_KEY
organization_id = var.ORGANIZATION_ID
zone = "fr-par-2"
region = "fr-par"
}
resource "scaleway_account_ssh_key" "main" {
name = "main"
public_key = file("~/.ssh/id_rsa.pub")
}
resource "scaleway_baremetal_server" "base" {
count = var.instance_number
name = "${var.env_name}-BareMetal-${count.index}"
offer = var.baremetal_type
os = var.baremetal_image
ssh_key_ids = [scaleway_account_ssh_key.main.id]
tags = [ "BareMetal-${count.index}" ]
}
resource "null_resource" "ssh" {
count = var.instance_number
connection {
type = "ssh"
private_key = file("~/.ssh/id_rsa")
user = "root"
password = ""
host = scaleway_baremetal_server.base[count.index].ips.address
port = 22
}
provisioner "remote-exec" {
script = "provision/install_java_python.sh"
}
}
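The "does not have any attributes" message typically appears when the expression resolves to a collection (or a not-yet-known object) rather than a single object, which suggests ips is a list here. If so, indexing one element before reading address may be what is needed; this is a guess based on the error text, not verified against the Scaleway provider docs:

host = scaleway_baremetal_server.base[count.index].ips[0].address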
I am trying to provision a DigitalOcean droplet using Terraform. I appear to be missing the host argument in the connection block, but I am not certain what value I need for DigitalOcean.
This is my configuration file:
resource "digitalocean_droplet" "test" {
image = "ubuntu-18-04-x64"
name = "test"
region = "nyc1"
size = "512mb"
private_networking = true
ssh_keys = [
"${var.ssh_fingerprint}"
]
connection {
user = "root"
type = "ssh"
private_key = "${file("~/.ssh/id_rsa")}"
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"export PATH=$PATH:/usr/bin",
# install nginx
"sudo apt-get update",
"sudo apt-get -y install nginx"
]
}
}
"terraform validate" gives me the error:
Error: Missing required argument
on frontend.tf line 11, in resource "digitalocean_droplet" "test":
11: connection {
The argument "host" is required, but no definition was found.
I fiddled around with this and found the answer.
In the connection block we should have the host as:
connection {
  user        = "root"
  type        = "ssh"
  host        = "${self.ipv4_address}"
  private_key = "${file(var.pvt_key)}"
  timeout     = "2m"
}
You can also explicitly reference the exported attribute:
connection {
  user        = "root"
  host        = "${digitalocean_droplet.test.ipv4_address}"
  type        = "ssh"
  private_key = "${file(var.pvt_key)}"
}
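Note that an explicit reference like digitalocean_droplet.test.ipv4_address only works from outside the droplet resource itself; inside its own block Terraform rejects it as self-referential, which is why self.ipv4_address is used there. A minimal sketch of the external pattern, reusing the droplet and var.pvt_key from above with a separate null_resource:

resource "null_resource" "configure_test" {
  depends_on = [digitalocean_droplet.test]

  connection {
    user        = "root"
    type        = "ssh"
    host        = digitalocean_droplet.test.ipv4_address
    private_key = file(var.pvt_key)
    timeout     = "2m"
  }

  provisioner "remote-exec" {
    inline = ["echo connected"]
  }
}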
I think there is a problem with your syntax.
Try using it like below:
private_key = file("/home/user/.ssh/id_rsa")
I'm using terraform version 0.12.25
Best of luck.
I can't figure out where it is trying to connect via SSH. Is it into the newly deployed resource?
How can I diagnose this error in more detail?
Error: Error applying plan:
1 error occurred:
* module.deploy_nixos.null_resource.deploy_nixos: timeout - last error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
data "google_compute_network" "default" {
name = "default"
}
resource "google_compute_firewall" "deploy-nixos" {
name = "deploy-nixos"
network = "${data.google_compute_network.default.name}"
allow {
protocol = "icmp"
}
// Allow SSH access
allow {
protocol = "tcp"
ports = ["22", "80", "443"]
}
source_tags = ["nixos"]
}
resource "google_compute_instance" "deploy-nixos" {
name = "deploy-nixos-example"
machine_type = "g1-small"
zone = "europe-west2-a"
# region = "eu-west2"
// Bind the firewall rules
tags = ["nixos"]
boot_disk {
initialize_params {
// Start with an image the deployer can SSH into
image = "${module.nixos_image_custom.self_link}"
size = "25"
}
}
network_interface {
network = "default"
// Give it a public IP
access_config {}
}
lifecycle {
// No need to re-deploy the machine if the image changed
// NixOS is already immutable
ignore_changes = ["boot_disk"]
}
}
module "deploy_nixos" {
source = "../../deploy_nixos"
// Deploy the given NixOS configuration. In this case it's the same as the
// original image. So if the configuration is changed later it will be
// deployed here.
nixos_config = "${path.module}/image_nixos_custom.nix"
target_user = "root"
target_host = "${google_compute_instance.deploy-nixos.network_interface.0.access_config.0.nat_ip}"
triggers = {
// Also re-deploy whenever the VM is re-created
instance_id = "${google_compute_instance.deploy-nixos.id}"
}
}
With debug output:
module.deploy_nixos.null_resource.deploy_nixos: Creating...
triggers.%: "" => "3"
triggers.deploy_nixos_drv: "" => "/nix/store/0dmz6dhqbk1g6ni3b92l95s377zbikaz-nixos-system-unnamed-19.03.172837.6c3826d1c93.drv"
triggers.deploy_nixos_keys: "" => "44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a"
triggers.instance_id: "" => "deploy-nixos-example"
module.deploy_nixos.null_resource.deploy_nixos: Provisioning with 'file'...
2019-06-08T22:31:00.030Z [DEBUG] plugin.terraform: file-provisioner (internal) 2019/06/08 22:31:00 [DEBUG] connecting to TCP connection for SSH
2019-06-08T22:31:00.041Z [DEBUG] plugin.terraform: file-provisioner (internal) 2019/06/08 22:31:00 [DEBUG] handshaking with SSH
2019-06-08T22:31:00.119Z [DEBUG] plugin.terraform: file-provisioner (internal) 2019/06/08 22:31:00 [WARN] ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
2019-06-08T22:31:00.119Z [DEBUG] plugin.terraform: file-provisioner (internal) 2019/06
Make sure your ssh key is added.
ssh-add ~/.ssh/id_rsa
Check the source of the module (source = "../../deploy_nixos"); the null_resource may be defined there (it is not shown in the question). You may have used a remote-exec or file provisioner there, and you need to check the connection properties in it (a sketch follows the sample below).
Sample Terraform connection properties look like this:
provisioner "file" {
source = "conf/myapp.conf"
destination = "/etc/myapp.conf"
connection {
type = "ssh"
user = "root"
password = "${var.root_password}"
}
}
For more details, check: https://www.terraform.io/docs/provisioners/connection.html
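In this case the handshake failure usually means none of the keys offered (agent keys or defaults) matches what the target accepts. Passing an explicit private_key in the module's connection block removes the dependency on the agent; a minimal sketch, assuming the module exposes its target_user/target_host inputs as variables of the same name and that ~/.ssh/id_rsa matches the key baked into the image:

connection {
  type        = "ssh"
  user        = var.target_user
  host        = var.target_host
  private_key = file("~/.ssh/id_rsa")
}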
I'm setting up an OpenStack instance using Terraform. I'm writing the returned IP to a file, but for some reason it's always empty (I have looked at the instance in the OpenStack console and everything is correct with the IP, security groups, etc.).
resource "openstack_compute_instance_v2" "my-deployment-web" {
count = "1"
name = "my-name-WEB"
flavor_name = "m1.medium"
image_name = "RHEL7Secretname"
security_groups = [
"our_security_group"]
key_pair = "our-keypair"
network {
name = "public"
}
metadata {
expire = "2",
owner = ""
}
connection {
type = "ssh"
user = "vagrant"
private_key = "config/vagrant_private.key"
agent = "false"
timeout = "15m"
}
##Create Ansible host in staging inventory
provisioner "local-exec" {
command = "echo -e '\n[web]\n${openstack_compute_instance_v2.my-deployment-web.network.0.floating_ip}' > ../ansible/inventories/staging/hosts"
interpreter = ["sh", "-c"]
}
}
The generated hosts file only gets [web] but no IP. Does anyone know why?
[web]
Modifying the variable from
${openstack_compute_instance_v2.my-deployment-web.network.0.floating_ip}
to
${openstack_compute_instance_v2.my-deployment-web.network.0.access_ip_v4}
solved the problem. Thank you @Matt Schuchard
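For reference, the corrected local-exec line in the resource above then reads (same command, only the attribute swapped):

command = "echo -e '\n[web]\n${openstack_compute_instance_v2.my-deployment-web.network.0.access_ip_v4}' > ../ansible/inventories/staging/hosts"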