Terragrunt + Terraform with modules + GitLab

I'm managing my infrastructure as code (IaC) on AWS with Terragrunt + Terraform.
I've already added my SSH key and GPG key to GitLab and left the branch unprotected in the repository as a test, but it didn't work.
This is the module call, which is roughly equivalent to a Terraform main.tf:
# ---------------------------------------------------------------------------------------------------------------------
# Terragrunt configuration
# ---------------------------------------------------------------------------------------------------------------------
terragrunt = {
  terraform {
    source = "git::ssh://git@gitlab.compamyx.com.br:2222/x/terraform-blueprints.git//route53?ref=0.3.12"
  }

  include = {
    path = "${find_in_parent_folders()}"
  }
}
# ---------------------------------------------------------------------------------------------------------------------
# Blueprint parameters
# ---------------------------------------------------------------------------------------------------------------------
zone_id = "ZDU54ADSD8R7PIX"
name = "k8s"
type = "CNAME"
ttl = "5"
records = ["tmp-elb.com"]
The point is that when I run terragrunt init, in one of the modules I get the following error:
ssh: connect to host gitlab.company.com.br port 2222: Connection timed out
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
[terragrunt] 2020/02/05 15:23:18 Hit multiple errors:
exit status 1
I also ran a test:
ssh -vvvv -T gitlab.companyx.com.br -p 2222
and it timed out as well.

This doesn't appear to be a Terragrunt or Terraform issue at all, but rather an issue with SSH access to the server.
If you are getting a timeout, it's most likely a connectivity issue (i.e., a firewall or network ACL is blocking access to that port from where you are attempting to connect).
If it were an SSH key issue, you'd get an "access denied" or similar error, but the timeout leads me to believe it's connectivity.
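First confirm with whoever manages the network that port 2222 on the GitLab host is reachable from wherever Terragrunt runs. If opening that port is not possible, one hedged workaround (assuming your GitLab instance also serves Git over HTTPS on port 443, which is not stated in the question) is to point the module source at the HTTPS clone URL instead of SSH:
terragrunt = {
  terraform {
    # Same blueprint, fetched over HTTPS (port 443) instead of SSH on port 2222.
    source = "git::https://gitlab.compamyx.com.br/x/terraform-blueprints.git//route53?ref=0.3.12"
  }

  include = {
    path = "${find_in_parent_folders()}"
  }
}
You would then authenticate with HTTPS credentials (for example a GitLab deploy token) rather than the SSH key.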

Related

azurerm_virtual_machine (remote-exec): (output suppressed due to sensitive value in config) Terraform output

I'm looking for a way to see what is going on during creation of a virtual machine. I use a complex cluster configuration, and to test whether it's working I need to be able to see the output, but in some cases I can't because of sensitive values. This is related to running the remote-exec provisioner.
module.MongoInstall.azurerm_virtual_machine.MongoVirtualMachine[2] (remote-exec): (output suppressed due to sensitive value in config)
Could you please help me?
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/mongo-activate.sh",
"cd /tmp",
"sudo ./mongo-activate.sh ${var.username} ${var.vmpassword} ${var.mongopassword} ${local.isCluster} ${join("," ,azurerm_public_ip.MongoPublicIpAddress.*.fqdn)} ${var.hasArbiter}",
"rm mongo-activate.sh",
]
connection {
type = "ssh"
host = "${element(azurerm_public_ip.MongoPublicIpAddress.*.ip_address, 0)}"
user = "${var.username}"
password = var.vmpassword
timeout = "15m"
}
}
Example of variables:
variable "vmpassword" {
default = "testtesttest" //psw:mongo VM
}
Thank you, Andriy, for your suggestion.
Yes, we cannot see sensitive values such as passwords and keys in the console after terraform apply; Terraform will suppress logging from the provisioner. If the provisioner configuration or connection info includes sensitive values, we need to unmark them before calling the provisioner. Failing to do so causes serialization to error.
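Here is a minimal sketch of that unmarking idea, assuming the password variables (vmpassword, mongopassword) are declared with sensitive = true and you are on a Terraform version that provides the nonsensitive() function (v0.15+). Only do this for test values you are comfortable printing:
variable "vmpassword" {
  default   = "testtesttest" // psw: mongo VM
  sensitive = true
}

provisioner "remote-exec" {
  inline = [
    "chmod +x /tmp/mongo-activate.sh",
    "cd /tmp",
    # nonsensitive() removes the sensitive marking, so Terraform no longer
    # suppresses this provisioner's output in the console.
    "sudo ./mongo-activate.sh ${var.username} ${nonsensitive(var.vmpassword)} ${nonsensitive(var.mongopassword)} ${local.isCluster} ${join(",", azurerm_public_ip.MongoPublicIpAddress.*.fqdn)} ${var.hasArbiter}",
    "rm mongo-activate.sh",
  ]

  connection {
    type     = "ssh"
    host     = element(azurerm_public_ip.MongoPublicIpAddress.*.ip_address, 0)
    user     = var.username
    password = nonsensitive(var.vmpassword)
    timeout  = "15m"
  }
}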

Using terraform in an AWS SSO+Okta environment

I'm using SSO in an AWS SSO + Control Tower + Okta environment.
When I log in to AWS via Okta, I use the Option 1 setting so that I can use the aws command:
Get credentials for AdministratorAccess
When I run terraform plan, I get the following error. There is no problem with terraform init.
【terraform plan error】
╷
│ Error: AccessDenied: Access Denied
│ status code: 403, request id: QN738HDZPQKMERFX, host id: roylHCGU3cOMfwkWjdpbeG991Ho28bredvY1/6vSgGavaM/VXn6rNtDSGIpnBS2cqetL38YdF1o=
│
│
╵
I thought the above error might be due to the fact that I cannot access the terraform.tfstate that I have set in backend.tf, but the following command completes successfully
【backend.tf】
terraform {
  backend "s3" {
    bucket               = "test-tfstate2"
    key                  = "provisioning/test/static/production/terraform.tfstate"
    region               = "ap-northeast-1"
    workspace_key_prefix = ""
  }
}
【command】
aws s3 ls s3://test-tfstate2/provisioning/test/static/production/terraform.tfstate
【Result】
2022-02-17 18:18:05 0 terraform.tfstate
What is the cause of the AccessDenied error in this situation?
Any advice would be appreciated.
In order to troubleshoot this issue further and find the root cause of the problem you can execute:
TF_LOG=DEBUG terraform plan
This should give you the exact reason why plan is failing. I suspect it's a permission issue ("Validate Response s3/ListObjects failed"), but we need to confirm that first by running plan with the DEBUG option.
It could also be that Terraform is using the default credentials from your ~/.aws/credentials file. That would explain why aws s3 ls ... works when you run it manually but the same access fails under Terraform.
To avoid this, please use Option 2 from the guide you provided (by creating an additional configuration block in your ~/.aws/credentials file).
Then you can do export AWS_PROFILE={name_of_your_new_profile} and try running terraform plan once again.
If none of this works, please update your question with the DEBUG output.
Best of luck :)
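If the DEBUG log confirms a credentials mismatch, a hedged sketch of the fix (the profile name sso-admin below is hypothetical; use whatever name you give the profile you create via Option 2) is to reference that profile explicitly in both the S3 backend and the AWS provider, so Terraform stops falling back to the default credentials:
terraform {
  backend "s3" {
    bucket               = "test-tfstate2"
    key                  = "provisioning/test/static/production/terraform.tfstate"
    region               = "ap-northeast-1"
    workspace_key_prefix = ""
    profile              = "sso-admin" # hypothetical profile name from ~/.aws/config
  }
}

provider "aws" {
  region  = "ap-northeast-1"
  profile = "sso-admin" # same hypothetical profile
}
After changing backend settings you would need to re-run terraform init (for example terraform init -reconfigure) before terraform plan picks up the new profile.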

How can I fix Failed to query available provider packages when doing local provider development with terraform init?

Context: I'm developing a new TF provider. In order to test it, I use the following piece of advice from TF docs:
provider_installation {

  # Use /home/developer/tmp/terraform-null as an overridden package directory
  # for the hashicorp/null provider. This disables the version and checksum
  # verifications for this provider and forces Terraform to look for the
  # null provider plugin in the given directory.
  dev_overrides {
    "hashicorp/null" = "/home/developer/tmp/terraform-null"
  }

  # For all other providers, install them directly from their origin provider
  # registries as normal. If you omit this, Terraform will _only_ use
  # the dev_overrides block, and so no other providers will be available.
  direct {}
}
And when I run terraform plan / terraform apply, my provider works without any issues. However, when I try to run terraform init, I run into:
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider
hashicorp/null: could not connect to hashicorp: Failed
to request discovery document: Get
"https://hashicorp/.well-known/terraform.json": dial tcp: lookup hashicorp on
100.217.9.1:53: no such host
Is there a way I could fix it?
For the context, my main.tf file starts with
terraform {
  required_providers {
    null = {
      source = "hashicorp/null"
    }
  }
}
When I googled around, I found a related blog post; terraform plan seems to work for the author because he doesn't use other providers, which unfortunately is not the case for me.
This issue on GitHub seems to describe the same problem.

ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

I'm attempting to install Nginx on an EC2 instance using the Terraform remote-exec provisioner, but I keep running into this error.
ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
This is what my code looks like:
resource "aws_instance" "nginx" {
ami = data.aws_ami.aws-linux.id
instance_type = "t2.micro"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.allow_ssh.id]
connection {
type = "ssh"
host = self.public_ip
user = "ec2-user"
private_key = file(var.private_key_path)
}
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
Security group rules are set up to allow ssh from anywhere.
And I'm able to ssh into the box from my local machine.
Not sure if I'm missing something really obvious here. I've tried a newer version of Terraform, but it's the same issue.
If your EC2 instance is using an AMI for an operating system that uses cloud-init (the default images for most Linux distributions do) then you can avoid the need for Terraform to log in over SSH at all by using the user_data argument to pass a script to cloud-init:
resource "aws_instance" "nginx" {
ami = data.aws_ami.aws-linux.id
instance_type = "t2.micro"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.allow_ssh.id]
user_data = <<-EOT
yum install nginx -y
service nginx start
EOT
}
For an operating system that includes cloud-init, the system will run cloud-init as part of the system startup and it will access the metadata and user data API to retrieve the value of user_data. It will then execute the contents of the script, writing any messages from that operation into the cloud-init logs.
What I've described above is the official recommendation for how to run commands to set up your compute instance. The documentation says that provisioners are a last resort, and one of the reasons given is to avoid the extra complexity of correctly configuring SSH connectivity and authentication, which is the very complexity that caused you to ask this question. So I think following the advice in the documentation is the best way to address it.

Terraform cannot ssh into EC2 instance to upload files

I am trying to get a basic Terraform example up and running and then push a very simple Flask application in a Docker container to it. The script all works if I remove the file provisioner section and the user_data section. The .pem file is in the same location on my disk as the main.tf script and the terraform.exe file.
If I leave the file provisioner in, the script fails with the following error:
Error: Error applying plan:
1 error(s) occurred:
* aws_launch_configuration.example: 1 error(s) occurred:
* dial tcp :22: connectex: No connection could be made because the target machine actively refused it.
If I remove the file provisioner section, the script runs fine and I can SSH into the created instance using my private key, so the key_name part seems to be working OK. I think it's to do with the file provisioner trying to connect in order to upload my files.
Here is the launch configuration from my script. I have tried using the connection block, which I got from another post online, but I can't see what I am doing wrong.
resource "aws_launch_configuration" "example" {
image_id = "${lookup(var.eu_west_ami, var.region)}"
instance_type = "t2.micro"
key_name = "Terraform-python"
security_groups = ["${aws_security_group.instance.id}"]
provisioner "file" {
source = "python/hello_flask.py"
destination = "/home/ec2-user/hello_flask.py"
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("Terraform-python.pem")}"
timeout = "2m"
agent = false
}
}
provisioner "file" {
source = "python/flask_dockerfile"
destination = "/home/ec2-user/flask_dockerfile"
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("Terraform-python.pem")}"
timeout = "2m"
agent = false
}
}
user_data = <<-EOF
#!/bin/bash
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
sudo docker build -t flask_dockerfile:latest /home/ec2-user/flask_dockerfile
sudo docker run -d -p 5000:5000 flask_dockerfile
EOF
lifecycle {
create_before_destroy = true
}
}
It is probably something very simple and stupid that I am doing. Thanks in advance to anyone who takes a look.
aws_launch_configuration is not an actual EC2 instance but just a 'template' to launch instances. Thus, it is not possible to connect to it via SSH.
To copy those files you have two options:
Creating a custom AMI. For that, you can use Packer or Terraform itself, launching an EC2 instance with aws_instance and these file provisioners, and then creating an AMI from it with aws_ami.
The second option is not a best practice, but if the files are short you can include them in the user_data, as sketched below.
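A minimal sketch of the second option, reusing the names from the question. The embedded-file approach is illustrative only and is reasonable only while the rendered user_data stays under the EC2 user data size limit (about 16 KB):
resource "aws_launch_configuration" "example" {
  image_id        = "${lookup(var.eu_west_ami, var.region)}"
  instance_type   = "t2.micro"
  key_name        = "Terraform-python"
  security_groups = ["${aws_security_group.instance.id}"]

  # No file provisioner: the file content is embedded into the boot script
  # and written out by cloud-init on each instance launched from this
  # configuration.
  user_data = <<-EOF
              #!/bin/bash
              cat > /home/ec2-user/hello_flask.py <<'PYEOF'
              ${file("python/hello_flask.py")}
              PYEOF
              sudo yum update -y
              sudo yum install -y docker
              sudo service docker start
              sudo usermod -a -G docker ec2-user
              EOF

  lifecycle {
    create_before_destroy = true
  }
}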
