terraform, pass on user data only if variable is provided

I have an AWS EC2 instance to which I want to pass user data, but only if a variable necessary for the user data was provided at terraform apply time. I have tried various ways but cannot reach my goal.
Step 1:
resource "aws_instance" "publisher_instance" {
ami = var.publisher_instance_ami
instance_type = var.publisher_instance_type
subnet_id = "${aws_subnet.subnet2.id}"
key_name = var.key_name
vpc_security_group_ids = ["${aws_security_group.publisher_security_group.id}"]
tags = {
Name = "${local.workspace["name"]}-Test"
}
user_data = <<EOF
#!/bin/bash
/home/centos/launch -token ${var.token}
yum update -y
EOF
}
As you can see, I only want to pass user_data if var.token was provided while applying.
I then tried to put the user_data into a data object like this:
data "template_cloudinit_config" "userdata" {
gzip = false
base64_encode = false
part {
content_type = "text/x-shellscript"
content = <<-EOF
#!/bin/bash
/home/centos/launch -token ${var.token}
yum update -y
EOF
}
}
and tried this:
user_data = "${data.template_cloudinit_config.userdata.rendered}"
but I cannot figure out how to put this into a condition.
Can you help me?
Thanks!

Use the ternary operator, and pass null if there is no token:
user_data = length(var.token) == 0 ? null : data.template_cloudinit_config.userdata.rendered
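For this to work when no value is supplied, var.token needs a declared default; a minimal sketch (the empty-string default is an assumption, the declaration is not shown in the question):

variable "token" {
  type    = string
  default = ""
}

resource "aws_instance" "publisher_instance" {
  # ... other arguments as in the question ...
  user_data = length(var.token) == 0 ? null : data.template_cloudinit_config.userdata.rendered
}

When user_data evaluates to null, Terraform treats the argument as unset, so no user data is attached to the instance.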

Related

Cloudinit file inside terraform config file not working

I'm trying to run a cloud-init file by passing it in the Terraform config file. The terraform apply command creates all the resources, but when I spin up the VM, none of the changes from the cloud-init are seen in the VM.
Here is the cloud-init file with the .tpl extension:
users:
  - name: ansible
    gecos: Ansible
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: [users, admin]
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1.......
And here is the main.tf file:
data "template_file" "users_data" {
template = file("./sshPass.tpl")
}
data "template_cloudinit_config" "config" {
gzip = true
base64_encode = true
part {
content_type = "text/cloud-config"
content = data.template_file.users_data.rendered
}
resource "azurerm_linux_virtual_machine" "poc-vm" {
name = var.vm_name
resource_group_name = azurerm_resource_group.poc_rg.name
location = azurerm_resource_group.poc_rg.location
size = var.virtual_machine_size
admin_username = var.vm_username
network_interface_ids = [azurerm_network_interface.poc_nic_1.id]
admin_ssh_key {
username = var.vm_username
public_key = tls_private_key.poc_key.public_key_openssh
}
os_disk {
caching = var.disk_caching
storage_account_type = var.storage_type
}
source_image_reference {
publisher = var.image_publisher
offer = var.image_offer
sku = var.image_sku
version = var.image_version
}
user_data = data.template_cloudinit_config.config.rendered
}
Try this:
data "template_cloudinit_config" "config" {
gzip = true
base64_encode = true
part {
content_type = "text/cloud-config"
content = "${data.template_file.users_data.rendered}"
}
In this example I changed the line content = data.template_file.users_data.rendered to content = "${data.template_file.users_data.rendered}".
Hope this helps!
Found the errors. Changed the extension to '.cfg'. Used 'custom_data' instead of user_data.
Added '#cloud-config' as the first line of the file.
Made sure I removed any spaces at the end of my SSH key.
I also felt like I was using the wrong SSH key to log in the whole time.
But anyway, those things helped me.
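Put together, a minimal sketch of the fixes described above (the renamed sshPass.cfg file is an assumption based on the poster's description; custom_data on azurerm_linux_virtual_machine expects base64-encoded input, which base64_encode = true provides):

# sshPass.cfg must have "#cloud-config" as its very first line
data "template_cloudinit_config" "config" {
  gzip          = true
  base64_encode = true

  part {
    content_type = "text/cloud-config"
    content      = file("./sshPass.cfg")
  }
}

resource "azurerm_linux_virtual_machine" "poc-vm" {
  # ... other arguments unchanged from the question ...
  custom_data = data.template_cloudinit_config.config.rendered
}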

FluentBit setup

I'm trying to set up FluentBit for my EKS cluster in Terraform, via this module, and I have a couple of questions:
cluster_identity_oidc_issuer - what is this? Frankly, I was just told to set this up, so I have very little knowledge about FluentBit, but I assume this "issuer" provides an identity with the needed permissions. For example, Okta? We use Okta, so what would I use as a value here?
cluster_identity_oidc_issuer_arn - no idea what this value is supposed to be.
worker_iam_role_name - as in the role with autoscaling capabilities (oidc)?
This is what eks.tf looks like:
module "eks" {
source = "terraform-aws-modules/eks/aws"
cluster_name = "DevOpsLabs"
cluster_version = "1.19"
cluster_endpoint_private_access = true
cluster_endpoint_public_access = true
cluster_addons = {
coredns = {
resolve_conflicts = "OVERWRITE"
}
kube-proxy = {}
vpc-cni = {
resolve_conflicts = "OVERWRITE"
}
}
vpc_id = "xxx"
subnet_ids = ["xxx","xxx", "xxx", "xxx" ]
self_managed_node_groups = {
bottlerocket = {
name = "bottlerocket-self-mng"
platform = "bottlerocket"
ami_id = "xxx"
instance_type = "t2.small"
desired_size = 2
iam_role_additional_policies = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"]
pre_bootstrap_user_data = <<-EOT
echo "foo"
export FOO=bar
EOT
bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"
post_bootstrap_user_data = <<-EOT
cd /tmp
sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
sudo systemctl enable amazon-ssm-agent
sudo systemctl start amazon-ssm-agent
EOT
}
}
}
And for the role.tf:
data "aws_iam_policy_document" "cluster_autoscaler" {
statement {
effect = "Allow"
actions = [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeTags",
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"ec2:DescribeLaunchTemplateVersions",
]
resources = ["*"]
}
}
module "config" {
source = "github.com/ahmad-hamade/terraform-eks-config/modules/eks-iam-role-with-oidc"
cluster_name = module.eks.cluster_id
role_name = "cluster-autoscaler"
service_accounts = ["kube-system/cluster-autoscaler"]
policies = [data.aws_iam_policy_document.cluster_autoscaler.json]
tags = {
Terraform = "true"
Environment = "dev-test"
}
}
Since you are using a Terraform EKS module, you can access attributes of the created resources by looking at the Outputs tab [1]. There you can find the following outputs:
cluster_id
cluster_oidc_issuer_url
oidc_provider_arn
They are accessible by using the following syntax:
module.<module_name>.<output_id>
In your case, you would get the values you need using the following syntax:
cluster_id -> module.eks.cluster_id
cluster_oidc_issuer_url -> module.eks.cluster_oidc_issuer_url
oidc_provider_arn -> module.eks.oidc_provider_arn
and assign them to the inputs from the FluentBit module:
cluster_name = module.eks.cluster_id
cluster_identity_oidc_issuer = module.eks.cluster_oidc_issuer_url
cluster_identity_oidc_issuer_arn = module.eks.oidc_provider_arn
For the worker role I didn't see an output from the eks module, so I think that could be an output of the config module [2]:
worker_iam_role_name = module.config.iam_role_name
The OIDC parts of the configuration come from the EKS cluster [3]. Another blog post that goes into detail can be found here [4].
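Putting it together, a hedged sketch of the wiring (the module source is left elided because the question does not name it; the input names follow the question, and the worker role output is the assumption from [2]):

module "fluent_bit" {
  source = "..." # the FluentBit module referenced in the question

  cluster_name                     = module.eks.cluster_id
  cluster_identity_oidc_issuer     = module.eks.cluster_oidc_issuer_url
  cluster_identity_oidc_issuer_arn = module.eks.oidc_provider_arn
  worker_iam_role_name             = module.config.iam_role_name # assumption, see [2]
}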
[1] https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest?tab=outputs
[2] https://github.com/ahmad-hamade/terraform-eks-config/blob/master/modules/eks-iam-role-with-oidc/outputs.tf
[3] https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
[4] https://aws.amazon.com/blogs/containers/introducing-oidc-identity-provider-authentication-amazon-eks/

How to change aws_instance with user_data value in terraform?

I have an aws_instance in a Terraform file, and I want to tag this instance with a value from within my user_data script.
How can I tag my instance with the value of LOGINTOKEN from the user_data script?
Example:
resource "aws_instance" "my_instance" {
ami = "some_ami"
instance_type = "some_instance"
//other configs
user_data = <<EOF
#!/bin/bash
LOGINTOKEN=$(echo { "token": "qwerty12345" } | docker run --rm -i stedolan/jq -r .token)
EOF
tags {
LoginToken = "$LOGINTOKEN"
}
}
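Note that tags are evaluated by Terraform when it creates the instance, while the user_data script only runs afterwards inside the instance, so a value computed in the script cannot flow back into the tags. One common inversion is to let Terraform own the value and interpolate it into both places; a minimal sketch with a hypothetical login_token variable (not from the original question):

variable "login_token" {
  type    = string
  default = "qwerty12345" # hypothetical; normally supplied via -var or a tfvars file
}

resource "aws_instance" "my_instance" {
  ami           = "some_ami"
  instance_type = "some_instance"

  # Terraform renders the token into the script before boot
  user_data = <<EOF
#!/bin/bash
LOGINTOKEN="${var.login_token}"
EOF

  tags = {
    LoginToken = var.login_token
  }
}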

terraform not working with remote exec for tpl script

I have a simple AWS EC2 instance as below:
resource "aws_instance" "App01" {
##ami = "ami-846144f8"
ami = "${data.aws_ami.aws_linux.id}"
instance_type = "t1.micro"
subnet_id = "${aws_subnet.public_1a.id}"
associate_public_ip_address = true
vpc_security_group_ids = ["${aws_security_group.web_server.id}","${aws_security_group.allow_ssh.id}"]
key_name = "key"
provisioner "remote-exec"{
inline = ["${template_file.bootstrap.rendered}"]
}
tags {
Name = "App01"
}
}
data "aws_ami" "aws_linux" {
most_recent = true
filter {
name = "name"
values = ["amzn2-ami-*-x86_64-gp2"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "owner-alias"
values = ["amazon"]
}
}
resource "template_file" "bootstrap" {
template = "${file("bootstrap.tpl")}"
vars {
app01ip = "${aws_instance.App01.private_ip}"
app02ip = "${aws_instance.App02.private_ip}"
DBandMQip = "${aws_instance.DBandMQ.private_ip}"
}
}
This is my tbl script
#!/bin/bash -xe
# install necessary items like ansible and
sudo yum-config-manager --enable epel
sudo amazon-linux-extras install ansible2
echo "${app01ip} App01" > /etc/hosts
echo "${app02ip} App02" > /etc/hosts
echo "${DBandMQip} DBandMQ" > /etc/hosts
I keep getting:
Error: Error asking for user input: 1 error(s) occurred:
* Cycle: aws_instance.App01, template_file.bootstrap
I believe it's coming from the provisioner "remote-exec" portion, but I am unsure what's wrong because it looks fine to me. Does anyone have any idea what I am doing wrong?
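The error message names the loop: aws_instance.App01 depends on template_file.bootstrap (through the provisioner), while template_file.bootstrap depends on aws_instance.App01.private_ip, which is a circular reference. One common way to break such a cycle is to move the provisioner onto a null_resource so the instance itself no longer references the template; a hedged sketch in the question's Terraform 0.11-style syntax (the connection details are hypothetical):

resource "null_resource" "bootstrap" {
  # re-run if the instance IP changes
  triggers = {
    app01_ip = "${aws_instance.App01.private_ip}"
  }

  connection {
    type = "ssh"
    host = "${aws_instance.App01.public_ip}"
    user = "ec2-user" # hypothetical; depends on the AMI
    # private_key = ... (hypothetical)
  }

  provisioner "remote-exec" {
    inline = ["${template_file.bootstrap.rendered}"]
  }
}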

Multiple user_data File use in Terraform

I am trying to have a common user_data file for common tasks such as folder creation and certain package installs, and a separate user_data file for application-specific configuration.
I am trying the below -
user_data = "${data.template_file.userdata_common.rendered}", "${data.template_file.userdata_master.rendered}"
With these configs -
Common User Data Template
data "template_file" "userdata_common" {
template = "${file("${path.module}/userdata_common.sh")}"
vars {
"ALBTarget" = "${var.ALBTarget}"
"s3bucket" = "${var.s3bucket}"
"centrifydomain" = "${lookup(var.centrifydomain, format("%s-%s", lower(var.env),var.region))}"
"centrifyadgroup" = "${lookup(var.centrifyadgroup, format("%s-%s", lower(var.env),var.region))}"
}
}
Application-Specific Config Template
data "template_file" "userdata_master" {
template = "${file("${path.module}/userdata_master.sh")}"
vars {
"ALBTarget" = "${var.ALBTarget}"
"s3bucket" = "${var.s3bucket}"
"centrifydomain" = "${lookup(var.centrifydomain, format("%s-%s", lower(var.env),var.region))}"
"centrifyadgroup" = "${lookup(var.centrifyadgroup, format("%s-%s", lower(var.env),var.region))}"
}
}
I get the below error when I run plan -
Failed to load root config module: Error parsing /terraform/main.tf: key ${data.template_file.userdata_common.rendered}"' expected start of object ('{') or assignment ('=')
Is this possible using Terraform (0.9.3)?
If not, what's the best way to do this with Terraform?
Did you try template_cloudinit_config?
Add the code below:
data "template_cloudinit_config" "master" {
gzip = true
base64_encode = true
# get common user_data
part {
filename = "common.cfg"
content_type = "text/part-handler"
content = "${data.template_file.userdata_common.rendered}"
}
# get master user_data
part {
filename = "master.cfg"
content_type = "text/part-handler"
content = "${data.template_file.userdata_master.rendered}"
}
}
# sample code to use it.
resource "aws_instance" "web" {
ami = "ami-d05e75b8"
instance_type = "t2.micro"
user_data = "${data.template_cloudinit_config.master.rendered}"
}
Let me know if it works.
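Since the two templates here are shell scripts (userdata_common.sh and userdata_master.sh), a variant worth considering is cloud-init's shell-script content type; "text/part-handler" is normally reserved for custom part-handler code. A hedged sketch of one part:

part {
  filename     = "common.sh"
  content_type = "text/x-shellscript"
  content      = "${data.template_file.userdata_common.rendered}"
}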
You can use a "provisioner" to modify the infrastructure you are creating with Terraform. Here is an example from the Getting Started guide: https://www.terraform.io/intro/getting-started/provision.html
