the server could not find the requested resource (post namespaces) over ssh - terraform

Hello, I have a Terraform project. Currently I download the project onto the host and run terraform apply there, but I want to try doing this over SSH. In order to do that I have the following code:
resource "kubernetes_namespace" "main" {
connection {
type = "ssh"
user = "root"
private_key = var.private_key
host = var.host
host_key = var.host_key
}
metadata {
name = var.namespace
}
}
I don't have a password, because I only use ssh -i privatekey user@host to get access to that host.
But I get the following error:
Error: the server could not find the requested resource (post namespaces)
The provider is the following:
provider "kubernetes" {
config_path = "~/.kube/config"
config_context = "microk8s"
}
and the file and the context are correct.
EDIT
Terraform looks for the kubeconfig on the local host instead of the remote host.
How can I solve this and apply changes remotely using ssh?
Thanks
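A note on what is happening here, not taken from the original thread: the connection block only applies to provisioners, so the kubernetes provider still reads the kubeconfig and talks to the API server from the machine where terraform apply runs. One common workaround is to forward the remote API server port over SSH and point the provider at the local end of the tunnel. A rough sketch, assuming microk8s listens on its default port 16443 (adjust the host, port, and key paths to your setup):
# On the local machine, before running terraform apply:
#   ssh -i privatekey -N -L 16443:127.0.0.1:16443 root@<remote-host>

provider "kubernetes" {
  host           = "https://127.0.0.1:16443"  # local end of the SSH tunnel
  config_path    = "~/.kube/config"           # kubeconfig copied from the remote host
  config_context = "microk8s"
  insecure       = true                       # only if the API certificate does not cover 127.0.0.1
}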

Related

ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

I'm attempting to install Nginx on an ec2 instance using the Terraform provisioner remote-exec but I keep running into this error.
ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
This is what my code looks like:
resource "aws_instance" "nginx" {
ami = data.aws_ami.aws-linux.id
instance_type = "t2.micro"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.allow_ssh.id]
connection {
type = "ssh"
host = self.public_ip
user = "ec2-user"
private_key = file(var.private_key_path)
}
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
Security group rules are set up to allow ssh from anywhere.
And I'm able to ssh into the box from my local machine.
Not sure if I'm missing something really obvious here. I've tried a newer version of Terraform, but it's the same issue.
If your EC2 instance is using an AMI for an operating system that uses cloud-init (the default images for most Linux distributions do) then you can avoid the need for Terraform to log in over SSH at all by using the user_data argument to pass a script to cloud-init:
resource "aws_instance" "nginx" {
ami = data.aws_ami.aws-linux.id
instance_type = "t2.micro"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.allow_ssh.id]
user_data = <<-EOT
yum install nginx -y
service nginx start
EOT
}
For an operating system that includes cloud-init, the system will run cloud-init as part of the system startup and it will access the metadata and user data API to retrieve the value of user_data. It will then execute the contents of the script, writing any messages from that operation into the cloud-init logs.
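If the commands don't appear to have run, the cloud-init log files on the instance are the place to look. These paths are the usual cloud-init defaults rather than anything specific to this answer:
sudo cat /var/log/cloud-init-output.log   # stdout/stderr of the user_data script
sudo cat /var/log/cloud-init.log          # cloud-init's own processing log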
What I've described above is the official recommendation for how to run commands to set up your compute instance. The documentation says that provisioners are a last resort, and one of the reasons given is to avoid the extra complexity of correctly configuring SSH connectivity and authentication. That is the very complexity that caused you to ask this question, so I think following the advice in the documentation is the best way to address it.

Terragrunt + Terraform with modules + GITLab

I'm managing my infrastructure (IaC) on AWS with Terragrunt + Terraform.
I already added the SSH key and GPG key to GitLab and left the branch of the repository unprotected to run a test, but it didn't work.
This is the module call, which is essentially the equivalent of Terraform's main.tf.
# ---------------------------------------------------------------------------------------------------------------------
# Terragrunt configuration
# ---------------------------------------------------------------------------------------------------------------------
terragrunt = {
  terraform {
    source = "git::ssh://git@gitlab.compamyx.com.br:2222/x/terraform-blueprints.git//route53?ref=0.3.12"
  }

  include = {
    path = "${find_in_parent_folders()}"
  }
}

# ---------------------------------------------------------------------------------------------------------------------
# Blueprint parameters
#
zone_id = "ZDU54ADSD8R7PIX"
name    = "k8s"
type    = "CNAME"
ttl     = "5"
records = ["tmp-elb.com"]
The point is that when I run terragrunt init, in one of the modules I get the following error:
ssh: connect to host gitlab.company.com.br port 2222: Connection timed out
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
[terragrunt] 2020/02/05 15:23:18 Hit multiple errors:
exit status 1
I ran the following test:
ssh -vvvv -T gitlab.companyx.com.br -p 2222
and also got a timeout.
This doesn't appear to be a terragrunt or terraform issue at all, but rather, an issue with SSH access to the server.
If you are getting a timeout, it seems like it's most likely a connectivity issue (i.e., a firewall/network ACL is blocking access on that port from where you are attempting to access it).
If it were an SSH key issue, you'd get an "access denied" or similar issue, but the timeout definitely leads me to believe it's connectivity.
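A quick way to confirm that (generic commands, not from the original answer) is to test the TCP connection and the SSH handshake separately:
nc -zv gitlab.companyx.com.br 2222          # does anything answer on that port at all?
ssh -p 2222 -T git@gitlab.companyx.com.br   # if the port is reachable, check SSH authentication against GitLab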

Unable to execute "remote-exec" provisioner in Azure terraform

I am trying to execute remote-exec provisioner when deploying a VM in Azure but inline code in remote-exec never executes.
Here is my provisioner and connection code:
provisioner "remote-exec" {
inline = [
"touch newfile.txt",
"touch newfile2.txt",
]
}
connection {
type = "ssh"
host = "${azurerm_public_ip.publicip.ip_address}"
user = "testuser"
private_key = "${file("~/.ssh/id_rsa")}"
agent = false
}
Code never executes and gives the error:
Error: Failed to read ssh private key: no key found
The key (id_rsa) is saved in the same location as the main.tf file on the VM where I am running Terraform.
Please suggest what is wrong here.
As @ydaetskcoR commented, your code private_key = "${file("~/.ssh/id_rsa")}" indicates that the private key should exist at .ssh/id_rsa under your home directory, like /home/username on Linux or C:\Users\username on Windows.
You could save the key (id_rsa) in that directory to match your code; otherwise, you need to point your code at the key's actual path.
For example, edit it to private_key = "${file("${path.module}/id_rsa")}"
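Putting that together, the connection block would look roughly like this (a sketch assuming the key file sits next to the module's .tf files):
connection {
  type        = "ssh"
  host        = "${azurerm_public_ip.publicip.ip_address}"
  user        = "testuser"
  private_key = "${file("${path.module}/id_rsa")}"  # key stored alongside the .tf files
  agent       = false
}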

Providing Terraform with credentials in terraform files instead of env variable

I have set up a Terraform project with a remote back-end on GCP. Now when I want to deploy the infrastructure, I run into issues with credentials. I have a credentials file in
/home/mike/.config/gcloud/credentials.json
In my terraform project I have the following data referring to the remote state:
data "terraform_remote_state" "project_id" {
backend = "gcs"
workspace = "${terraform.workspace}"
config {
bucket = "${var.bucket_name}"
prefix = "${var.prefix_project}"
}
}
and I specify the cloud provider with the details of my credentials file.
provider "google" {
version = "~> 1.16"
project = "${data.terraform_remote_state.project_id.project_id}"
region = "${var.region}"
credentials = "${file(var.credentials)}"
}
However, this runs into
data.terraform_remote_state.project_id: data.terraform_remote_state.project_id:
error initializing backend:
storage.NewClient() failed: dialing: google: could not find default
credentials.
If I add
export GOOGLE_APPLICATION_CREDENTIALS=/home/mike/.config/gcloud/credentials.json
I do get it to run as desired. My issue is that I would like to specify the credentials in the Terraform files, because I am running the Terraform commands in an automated way from a Python script where I cannot set environment variables. How can I let Terraform know where the credentials are without setting the env variable?
I was facing the same error when trying to run terraform (version 1.1.5) commands in spite of having successfully authenticated via gcloud auth login.
Error message in my case:
Error: storage.NewClient() failed: dialing: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
It turned out that I had to also authenticate via gcloud auth application-default login and was able to run terraform commands thereafter.
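For reference, the two commands grant different credentials (standard gcloud behaviour, not something stated in the original answer):
gcloud auth login                       # credentials used by the gcloud CLI itself
gcloud auth application-default login   # Application Default Credentials picked up by client libraries and Terraform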
I figured this out in the end.
The data block also needs to be given the credentials. E.g.:
data "terraform_remote_state" "project_id" {
backend = "gcs"
workspace = "${terraform.workspace}"
config = {
bucket = "${var.bucket_name}"
prefix = "${var.prefix_project}"
credentials = "${var.credentials}" <- added
}
}
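Note that terraform init also needs credentials for the backend itself, and backend blocks cannot interpolate variables, so the path has to be a literal string (or be passed via -backend-config). A sketch with placeholder bucket and prefix values:
terraform {
  backend "gcs" {
    bucket      = "my-terraform-state"   # placeholder
    prefix      = "project"              # placeholder
    credentials = "/home/mike/.config/gcloud/credentials.json"
  }
}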

Terraform cannot ssh into EC2 instance to upload files

I am trying to get a basic terraform example up and running and then push a very simple flask application in a docker container there. The script all works if I remove the file provisioner section and the user data section. The pem file is in the same location on my disk as the main.tf script and the terraform.exe file.
If I leave the file provisioner in then the script fails with the following error:
Error: Error applying plan:
1 error(s) occurred:
* aws_launch_configuration.example: 1 error(s) occurred:
* dial tcp :22: connectex: No connection could be made because the target machine actively refused it.
If I remove the file provisioning section the script runs fine, and I can SSH into the created instance using my private key, so the key_name part seems to be working OK. I think it's to do with the file provisioner trying to connect to upload my files.
Here is the launch configuration from my script. I have tried using the connection block, which I got from another post online, but I can't see what I am doing wrong.
resource "aws_launch_configuration" "example" {
image_id = "${lookup(var.eu_west_ami, var.region)}"
instance_type = "t2.micro"
key_name = "Terraform-python"
security_groups = ["${aws_security_group.instance.id}"]
provisioner "file" {
source = "python/hello_flask.py"
destination = "/home/ec2-user/hello_flask.py"
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("Terraform-python.pem")}"
timeout = "2m"
agent = false
}
}
provisioner "file" {
source = "python/flask_dockerfile"
destination = "/home/ec2-user/flask_dockerfile"
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("Terraform-python.pem")}"
timeout = "2m"
agent = false
}
}
user_data = <<-EOF
#!/bin/bash
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
sudo docker build -t flask_dockerfile:latest /home/ec2-user/flask_dockerfile
sudo docker run -d -p 5000:5000 flask_dockerfile
EOF
lifecycle {
create_before_destroy = true
}
}
It is probably something very simple that I am doing wrong; thanks in advance to anyone who takes a look.
aws_launch_configuration is not an actual EC2 instance but just a 'template' to launch instances. Thus, it is not possible to connect to it via SSH.
To copy those files you have two options:
Creating a custom AMI. For that, you can use Packer or Terraform itself, launching an EC2 instance with aws_instance and these file provisioners, and creating an AMI from it with aws_ami (see the sketch after this list).
The second option is not a best practice, but if the files are short, you can include them in the user_data.
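A rough sketch of the first option using Terraform itself. The resource names here are made up, and in practice the builder instance also needs reachable SSH (a public IP, key pair, and security group), which is omitted for brevity:
resource "aws_instance" "builder" {
  ami           = "${lookup(var.eu_west_ami, var.region)}"
  instance_type = "t2.micro"
  key_name      = "Terraform-python"

  provisioner "file" {
    source      = "python/hello_flask.py"
    destination = "/home/ec2-user/hello_flask.py"

    connection {
      type        = "ssh"
      host        = "${self.public_ip}"
      user        = "ec2-user"
      private_key = "${file("Terraform-python.pem")}"
    }
  }
}

resource "aws_ami_from_instance" "with_files" {
  name               = "flask-base"
  source_instance_id = "${aws_instance.builder.id}"
}

# The launch configuration would then use the baked image:
#   image_id = "${aws_ami_from_instance.with_files.id}"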
