Terraform remote-exec not working with tpl script

I have a simple AWS EC2 instance as below:
resource "aws_instance" "App01" {
  ##ami = "ami-846144f8"
  ami                         = "${data.aws_ami.aws_linux.id}"
  instance_type               = "t1.micro"
  subnet_id                   = "${aws_subnet.public_1a.id}"
  associate_public_ip_address = true
  vpc_security_group_ids      = ["${aws_security_group.web_server.id}", "${aws_security_group.allow_ssh.id}"]
  key_name                    = "key"

  provisioner "remote-exec" {
    inline = ["${template_file.bootstrap.rendered}"]
  }

  tags {
    Name = "App01"
  }
}
data "aws_ami" "aws_linux" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn2-ami-*-x86_64-gp2"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  filter {
    name   = "owner-alias"
    values = ["amazon"]
  }
}
resource "template_file" "bootstrap" {
  template = "${file("bootstrap.tpl")}"

  vars {
    app01ip   = "${aws_instance.App01.private_ip}"
    app02ip   = "${aws_instance.App02.private_ip}"
    DBandMQip = "${aws_instance.DBandMQ.private_ip}"
  }
}
This is my tpl script:
#!/bin/bash -xe
# install necessary items like ansible and
sudo yum-config-manager --enable epel
sudo amazon-linux-extras install ansible2
echo "${app01ip} App01" > /etc/hosts
echo "${app02ip} App02" > /etc/hosts
echo "${DBandMQip} DBandMQ" > /etc/hosts
I keep getting a
Error: Error asking for user input: 1 error(s) occurred:
* Cycle: aws_instance.App01, template_file.bootstrap
I believe it's coming from the remote-exec portion of the resource, but I am unsure what's wrong because it looks fine to me. Does anyone have any idea what I am doing wrong?
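For what it's worth, the cycle comes from aws_instance.App01 depending on template_file.bootstrap (via the provisioner) while the template depends on aws_instance.App01.private_ip, so neither can be created first. A minimal sketch of one way to break it, assuming Terraform ≥ 0.12's templatefile() function and moving the provisioner into a null_resource that runs only after all three instances exist (the connection details are placeholders, not from the question):

```hcl
# Hypothetical restructuring: render the template only after all
# instances exist, so no instance depends on its own IP.
resource "null_resource" "bootstrap" {
  depends_on = [aws_instance.App01, aws_instance.App02, aws_instance.DBandMQ]

  connection {
    type        = "ssh"
    host        = aws_instance.App01.public_ip
    user        = "ec2-user"      # assumed
    private_key = file("key.pem") # assumed
  }

  provisioner "remote-exec" {
    inline = [
      templatefile("bootstrap.tpl", {
        app01ip   = aws_instance.App01.private_ip
        app02ip   = aws_instance.App02.private_ip
        DBandMQip = aws_instance.DBandMQ.private_ip
      }),
    ]
  }
}
```

On Terraform 0.11 a data "template_file" referenced only from the null_resource would play the same role; the key point is that aws_instance.App01 itself no longer references the rendered template.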

What to do if instance creation and cloud-config are in separate sessions in Terraform?

I was able to create this device instance manually in OpenStack, and now I am trying to make it work with Terraform.
This device instance needs to do a hard reboot after the volume attachment, and any cloud-config needs to be done after rebooting. Here is the general sketch of my current main.tf file.
# Configure the OpenStack Provider
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

data "template_file" "user_data" {
  template = file("./userdata.yaml")
}

# create an instance
resource "openstack_compute_instance_v2" "server" {
  name         = "Device_Instance"
  image_id     = "xxx..."
  image_name   = "device_vmdk_1"
  flavor_name  = "m1.tiny"
  key_pair     = "my-keypair"
  region       = "RegionOne"
  config_drive = true

  network {
    name = "main_network"
  }
}

resource "openstack_blockstorage_volume_v2" "volume_2" {
  name     = "device_vmdk_2"
  size     = 1
  image_id = "xxx...."
}

resource "openstack_blockstorage_volume_v3" "volume_3" {
  name     = "device_vmdk_3"
  size     = 1
  image_id = "xxx..."
}

resource "openstack_compute_volume_attach_v2" "va_1" {
  instance_id = "${openstack_compute_instance_v2.server.id}"
  volume_id   = "${openstack_blockstorage_volume_v2.volume_2.id}"
}

resource "openstack_compute_volume_attach_v2" "va_2" {
  instance_id = "${openstack_compute_instance_v2.server.id}"
  volume_id   = "${openstack_blockstorage_volume_v3.volume_3.id}"
}

resource "null_resource" "reboot_instance" {
  provisioner "local-exec" {
    on_failure  = fail
    interpreter = ["/bin/bash", "-c"]
    command     = <<EOT
openstack server reboot --hard Device_Instance --insecure
echo "................"
EOT
  }

  depends_on = [openstack_compute_volume_attach_v2.va_1, openstack_compute_volume_attach_v2.va_2]
}

resource "openstack_compute_instance_v2" "server_config" {
  name       = "Device_Instance"
  user_data  = data.template_file.user_data.rendered
  depends_on = [null_resource.reboot_instance]
}
As of now, it is able to:
- have the "Device-Cloud-Instance" generated.
- have the "Device-Cloud-Instance" hard-rebooted.
But it fails after rebooting. As you may find, I have added this section at the end, but it does not seem to work:
resource "openstack_compute_instance_v2" "server_config" {}
Any ideas how to make it work?
Thanks,
Jack
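One observation, hedged since the full setup isn't shown: cloud-init consumes user_data only on an instance's first boot, and a second openstack_compute_instance_v2 with the same name creates a brand-new server rather than reconfiguring the existing one. A sketch of the more usual arrangement is to attach the user data to the original instance, so it is already on the config drive at first boot (values taken from the question):

```hcl
# Sketch: supply user_data on the instance itself instead of a
# second "server_config" resource.
resource "openstack_compute_instance_v2" "server" {
  name         = "Device_Instance"
  image_name   = "device_vmdk_1"
  flavor_name  = "m1.tiny"
  key_pair     = "my-keypair"
  region       = "RegionOne"
  config_drive = true
  user_data    = data.template_file.user_data.rendered

  network {
    name = "main_network"
  }
}
```

The hard reboot via the null_resource can then stay as-is, since the rendered user data is already in place when the instance first boots.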

Unable to provision with the file provisioner multiple times

While trying to provision with the file provisioner multiple times, the second occurrence is not being considered. Not sure if I'm doing it correctly.
Please throw some light!
The below block works perfectly:
provisioner "file" {
  source      = "/home/ubuntu/Desktop/aws_migration_using_terraform/tcs-btag-account_us-east-2/aws_infra_automation"
  destination = "/home/ubuntu"
}
However, this one didn't work, and there is no error thrown by Terraform itself!
provisioner "file" {
  source      = "/home/ubuntu/Desktop/aws_migration_using_terraform/tcs-btag-account_us-east-2/livedevops"
  destination = "/home/ubuntu"
}
The entire code is given below:
resource "tls_private_key" "bastion-key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "generated_key" {
  key_name   = var.bastion_key
  public_key = tls_private_key.bastion-key.public_key_openssh
}

resource "aws_instance" "bastion_host_us-east-2a" {
  ami                         = var.bastion_ami_id
  instance_type               = var.bastion_ec2_instance_type
  disable_api_termination     = false
  subnet_id                   = aws_subnet.devops_mig_pub_sub_01.id
  vpc_security_group_ids      = [aws_security_group.sg-btag-allow.id, aws_security_group.sg-ssh-allow.id]
  associate_public_ip_address = true
  availability_zone           = aws_subnet.devops_mig_pub_sub_01.availability_zone
  key_name                    = aws_key_pair.generated_key.id

  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ubuntu"
    port        = 22
    private_key = tls_private_key.bastion-key.private_key_pem
    timeout     = "60s"
  }

  # Copying files from local to remote
  provisioner "file" {
    source      = "/home/ubuntu/Desktop/aws_migration_using_terraform/tcs-btag-account_us-east-2/aws_infra_automation"
    destination = "/home/ubuntu"
  }

  provisioner "file" {
    source      = "/home/ubuntu/Desktop/aws_migration_using_terraform/tcs-btag-account_us-east-2/livedevops"
    destination = "/home/ubuntu"
  }

  user_data = <<-EOF
    #!/bin/bash
    sudo apt update -y
    sudo apt install -y software-properties-common
    sudo add-apt-repository --yes --update ppa:ansible/ansible
    sudo apt update -y
    sudo apt install -y ansible
    /usr/bin/ansible --version > ansible-v.txt
    echo "Installing the cloudwatch agent for Ubuntu Linux."
    curl -O https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
    dpkg -i -E ./amazon-cloudwatch-agent.deb
  EOF

  tags = {
    "Name" = "bastion_host"
  }
}

output "private_key" {
  value     = tls_private_key.bastion-key.private_key_pem
  sensitive = true
}

output "bastion_public_ip" {
  value = aws_instance.bastion_host_us-east-2a.public_ip
}

output "bastion_private_ip" {
  value = aws_instance.bastion_host_us-east-2a.private_ip
}

resource "aws_ebs_volume" "bastion_storage" {
  availability_zone = var.bastion-ebs-availability-zone
  size              = 50
  type              = "gp2"

  tags = {
    "Name" = "bastion_ebs_volume"
  }
}

resource "local_file" "bastion_private_key" {
  content         = tls_private_key.bastion-key.private_key_pem
  filename        = "bastion-key.pem"
  file_permission = "0400"
}
I see that ubuntu is the user used to SSH to the target machine. It's a bad idea to copy files directly into the user's home directory; in this case the file provisioner is simply replacing everything in the /home/ubuntu directory.
That directory also contains the SSH public keys used for authentication, in ~/.ssh/authorized_keys. That's the reason it's failing at the second file provisioner.
Create a tmp directory under /home/ubuntu, or use the /tmp or /var/tmp directories if they allow the ubuntu user to write.
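A sketch of that workaround: when the source is a directory without a trailing slash, the file provisioner uploads the directory itself into the destination, so pointing both uploads at a scratch directory keeps /home/ubuntu (and ~/.ssh) untouched:

```hcl
# Sketch: upload into /tmp instead of the user's home directory.
# Each source directory lands as /tmp/<dirname>; move it into place
# afterwards (e.g. via user_data or a remote-exec provisioner) if needed.
provisioner "file" {
  source      = "/home/ubuntu/Desktop/aws_migration_using_terraform/tcs-btag-account_us-east-2/aws_infra_automation"
  destination = "/tmp"
}

provisioner "file" {
  source      = "/home/ubuntu/Desktop/aws_migration_using_terraform/tcs-btag-account_us-east-2/livedevops"
  destination = "/tmp"
}
```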

Terraform: pass on user data only if a variable is provided

I have an AWS EC2 instance I want to pass user data to, but only if a variable necessary for the user data was provided at terraform apply.
I tried various ways but I cannot reach my goal.
Step 1:
resource "aws_instance" "publisher_instance" {
  ami                    = var.publisher_instance_ami
  instance_type          = var.publisher_instance_type
  subnet_id              = "${aws_subnet.subnet2.id}"
  key_name               = var.key_name
  vpc_security_group_ids = ["${aws_security_group.publisher_security_group.id}"]

  tags = {
    Name = "${local.workspace["name"]}-Test"
  }

  user_data = <<EOF
#!/bin/bash
/home/centos/launch -token ${var.token}
yum update -y
EOF
}
As you can see, I only want to pass user_data if var.token was provided while applying.
I then tried to put the user data into a data object like:
data "template_cloudinit_config" "userdata" {
  gzip          = false
  base64_encode = false

  part {
    content_type = "text/x-shellscript"
    content      = <<-EOF
      #!/bin/bash
      /home/centos/launch -token ${var.token}
      yum update -y
    EOF
  }
}
and tried this:
user_data = "${data.template_cloudinit_config.userdata.rendered}"
but I cannot figure out how to put this into a condition.
Can you help me?
thanks
Use the ternary operator, and pass null if there is no token:
user_data = length(var.token) == 0 ? null : data.template_cloudinit_config.userdata.rendered
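Putting it together, a minimal sketch (assuming token is declared as a string with an empty default, so it can be omitted at apply time):

```hcl
variable "token" {
  type    = string
  default = ""
}

resource "aws_instance" "publisher_instance" {
  # ... ami, instance_type, etc. as in the question ...

  # null means AWS receives no user data at all when no token is given
  user_data = length(var.token) == 0 ? null : data.template_cloudinit_config.userdata.rendered
}
```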

FluentBit setup

I'm trying to set up FluentBit for my EKS cluster in Terraform, via this module, and I have a couple of questions:
cluster_identity_oidc_issuer - what is this? Frankly, I was just told to set this up, so I have very little knowledge of FluentBit, but I assume this "issuer" provides an identity with the needed permissions. For example, Okta? We use Okta, so what would I use as a value here?
cluster_identity_oidc_issuer_arn - no idea what this value is supposed to be.
worker_iam_role_name - as in the role with autoscaling capabilities (oidc)?
This is what eks.tf looks like:
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "DevOpsLabs"
  cluster_version = "1.19"

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true

  cluster_addons = {
    coredns = {
      resolve_conflicts = "OVERWRITE"
    }
    kube-proxy = {}
    vpc-cni = {
      resolve_conflicts = "OVERWRITE"
    }
  }

  vpc_id     = "xxx"
  subnet_ids = ["xxx", "xxx", "xxx", "xxx"]

  self_managed_node_groups = {
    bottlerocket = {
      name          = "bottlerocket-self-mng"
      platform      = "bottlerocket"
      ami_id        = "xxx"
      instance_type = "t2.small"
      desired_size  = 2

      iam_role_additional_policies = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"]

      pre_bootstrap_user_data = <<-EOT
        echo "foo"
        export FOO=bar
      EOT

      bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"

      post_bootstrap_user_data = <<-EOT
        cd /tmp
        sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
        sudo systemctl enable amazon-ssm-agent
        sudo systemctl start amazon-ssm-agent
      EOT
    }
  }
}
And for the role.tf:
data "aws_iam_policy_document" "cluster_autoscaler" {
  statement {
    effect = "Allow"

    actions = [
      "autoscaling:DescribeAutoScalingGroups",
      "autoscaling:DescribeAutoScalingInstances",
      "autoscaling:DescribeLaunchConfigurations",
      "autoscaling:DescribeTags",
      "autoscaling:SetDesiredCapacity",
      "autoscaling:TerminateInstanceInAutoScalingGroup",
      "ec2:DescribeLaunchTemplateVersions",
    ]

    resources = ["*"]
  }
}

module "config" {
  source = "github.com/ahmad-hamade/terraform-eks-config/modules/eks-iam-role-with-oidc"

  cluster_name     = module.eks.cluster_id
  role_name        = "cluster-autoscaler"
  service_accounts = ["kube-system/cluster-autoscaler"]
  policies         = [data.aws_iam_policy_document.cluster_autoscaler.json]

  tags = {
    Terraform   = "true"
    Environment = "dev-test"
  }
}
Since you are using a Terraform EKS module, you can access attributes of the created resources by looking at the Outputs tab [1]. There you can find the following outputs:
cluster_id
cluster_oidc_issuer_url
oidc_provider_arn
They are accessible by using the following syntax:
module.<module_name>.<output_id>
In your case, you would get the values you need using the following syntax:
cluster_id -> module.eks.cluster_id
cluster_oidc_issuer_url -> module.eks.cluster_oidc_issuer_url
oidc_provider_arn -> module.eks.oidc_provider_arn
and assign them to the inputs from the FluentBit module:
cluster_name                     = module.eks.cluster_id
cluster_identity_oidc_issuer     = module.eks.cluster_oidc_issuer_url
cluster_identity_oidc_issuer_arn = module.eks.oidc_provider_arn
For the worker role I didn't see an output from the eks module, so I think that could be an output of the config module [2]:
worker_iam_role_name = module.config.iam_role_name
The OIDC parts of the configuration come from the EKS cluster [3]. Another blog post going into more detail can be found here [4].
[1] https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest?tab=outputs
[2] https://github.com/ahmad-hamade/terraform-eks-config/blob/master/modules/eks-iam-role-with-oidc/outputs.tf
[3] https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
[4] https://aws.amazon.com/blogs/containers/introducing-oidc-identity-provider-authentication-amazon-eks/

Running local-exec provisioner on all EC2 instances after creation

I currently have a Terraform file to create EC2 instances on AWS that looks like this:
resource "aws_instance" "influxdata" {
  count                  = "${var.ec2-count-influx-data}"
  ami                    = "${module.amis.rhel73_id}"
  instance_type          = "${var.ec2-type-influx-data}"
  vpc_security_group_ids = ["${var.sg-ids}"]
  subnet_id              = "${element(module.infra.subnet, count.index)}"
  key_name               = "${var.KeyName}"
  iam_instance_profile   = "Custom-role"

  tags {
    Name               = "influx-data-node"
    ASV                = "${module.infra.ASV}"
    CMDBEnvironment    = "${module.infra.CMDBEnvironment}"
    OwnerContact       = "${module.infra.OwnerContact}"
    custodian_downtime = "off"
    OwnerEid           = "${var.OwnerEid}"
  }

  ebs_block_device {
    device_name           = "/dev/sdg"
    volume_size           = 500
    volume_type           = "io1"
    iops                  = 2000
    encrypted             = true
    delete_on_termination = true
  }

  user_data = "${file("terraform/attach_ebs.sh")}"

  connection {
    private_key = "${file("/Users/usr111/Downloads/usr111_CD.pem")}"
    user        = "ec2-user"
  }

  provisioner "remote-exec" {
    inline = ["echo just checking for ssh. ttyl. bye."]
  }

  provisioner "local-exec" {
    command = <<EOF
ansible-playbook base-data.yml --key-file=/Users/usr111/Downloads/usr111_CD.pem --user=ec2-user -b -i "${self.private_ip},"
EOF
  }
}
resource "aws_route53_record" "influx-data-route" {
  count   = "${var.ec2-count-influx-data}"
  zone_id = "${var.r53-zone}"
  name    = "influx-data-0${count.index}"
  type    = "A"
  ttl     = "300"

  // matches up record N to instance N
  records = ["${element(aws_instance.influxdata.*.private_ip, count.index)}"]
}

resource "local_file" "inventory-meta" {
  filename = "inventory"

  content = <<-EOF
    [meta]
    ${join("\n", aws_instance.influxmeta.*.private_ip)}
    [data]
    ${join("\n", aws_instance.influxdata.*.private_ip)}
  EOF
}
What I'm struggling to figure out is how to get this part to run after I create the inventory file:
provisioner "local-exec" {
  command = <<EOF
ansible-playbook base-data.yml --key-file=/Users/usr111/Downloads/usr111_CD.pem --user=ec2-user -b -i "${self.private_ip},"
EOF
}
Right now I'm passing an IP into Ansible but I want to pass in the inventory file, which is only created after Terraform provisions all of the instances.
Since you are using AWS, maybe you could try using the Dynamic Inventory script, and your provisioner could look like this:
provisioner "local-exec" {
  command = "ansible-playbook -i ec2.py playbook.yml --limit ${self.public_ip}"
}
In your playbook you are going to need to wait for SSH to become available since Ansible is making the connection and not Terraform.
- name: wait for ssh
  hosts: localhost
  gather_facts: no
  tasks:
    - local_action: wait_for port=22 host="{{ ip }}" search_regex=OpenSSH delay=10
So the command should look like this:
provisioner "local-exec" {
  command = "ansible-playbook -i ec2.py playbook.yml --limit ${self.public_ip} --extra-vars 'ip=${self.public_ip}'"
}
You can also copy your playbooks to the host with the file provisioner, install Ansible there, and run the playbook locally with remote-exec, but that's up to you.
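To run the playbook once against the generated inventory rather than per instance, one option (a sketch in current Terraform syntax; the playbook name and key path are taken from the question) is a null_resource keyed on the inventory content:

```hcl
# Sketch: run ansible-playbook only after the inventory file is written,
# and re-run it whenever the inventory content changes.
resource "null_resource" "ansible_run" {
  triggers = {
    inventory = local_file.inventory-meta.content
  }

  provisioner "local-exec" {
    command = "ansible-playbook base-data.yml --key-file=/Users/usr111/Downloads/usr111_CD.pem --user=ec2-user -b -i inventory"
  }

  depends_on = [local_file.inventory-meta]
}
```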