How to use an external data source in Terraform with a bash script

I have a bash script that returns a single AMI ID. I want to use the AMI ID returned by the script as an input to my launch configuration.
data "external" "amiid" {
program = ["bash", "${path.root}/scripts/getamiid.sh"]
}
resource "aws_launch_configuration" "bastion-lc" {
name_prefix = "${var.lc_name}-"
image_id = "${data.external.amiid.result}"
instance_type = "${var.instance_type}"
placement_tenancy = "default"
associate_public_ip_address = false
security_groups = ["${var.bastion_sg_id}"]
iam_instance_profile = "${aws_iam_instance_profile.bastion-profile.arn}"
lifecycle {
create_before_destroy = true
}
}
When I run this with terraform plan I get an error saying
* module.bastion.data.external.amiid: 1 error(s) occurred:
* module.bastion.data.external.amiid: data.external.amiid: command "bash" produced invalid JSON: invalid character 'a' looking for beginning of object key string
Here's the getamiid.sh script:
#!/bin/bash
amiid=$(curl -s "https://someurl" | jq -r 'map(select(.tags.osVersion | startswith("os"))) | max_by(.tags.creationDate) | .id')
echo -n "{ami_id:\"${amiid}\"}"
When running the script manually, it returns:
{ami_id:"ami-xxxyyyzzz"}

Got it working with:
#!/bin/bash
amiid=$(curl -s "someurl" | jq -r 'map(select(.tags.osVersion | startswith("someos"))) | max_by(.tags.creationDate) | .id')
echo -n "{\"ami_id\":\"${amiid}\"}"
which returns
{"ami_id":"ami-xxxyyyzzz"}
Then in the Terraform resource, we reference it with:
image_id = "${element(split(",", data.external.amiid.result["ami_id"]), count.index)}"

Related

Terraform "file name too long" when executing with "null_resource" "apply"

I'm trying to execute the following command:
kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
as part of a local Terraform execution as follows:
locals {
  kubeconfig = yamlencode({
    apiVersion      = "v1"
    kind            = "Config"
    current-context = "terraform"
    clusters = [{
      name = module.eks.cluster_id
      cluster = {
        certificate-authority-data = module.eks.cluster_certificate_authority_data
        server                     = module.eks.cluster_endpoint
      }
    }]
    contexts = [{
      name = "terraform"
      context = {
        cluster = module.eks.cluster_id
        user    = "terraform"
      }
    }]
    users = [{
      name = "terraform"
      user = {
        token = data.aws_eks_cluster_auth.this.token
      }
    }]
  })
}

resource "null_resource" "apply" {
  triggers = {
    kubeconfig = base64encode(local.kubeconfig)
    cmd_patch  = <<-EOT
      kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
    EOT
  }

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    environment = {
      KUBECONFIG = self.triggers.kubeconfig
    }
    command = self.triggers.cmd_patch
  }
}
Executing the same command outside of Terraform, plainly on the command line works fine.
However, I always get the following error when executing as part of the Terraform script:
│ ': exit status 1. Output:
│ iAic2FtcGxlLWNsdXN0ZXI...WaU5ERXdNekEiCg==":
│ open
│ ImFwaVZlcnNpb24iOiAidjEiy...RXdNekEiCg==:
│ file name too long
Does anybody have any idea what the issue could be?
As per my comment: the KUBECONFIG environment variable needs to be a list of configuration files and not the content of the file itself [1]:
The KUBECONFIG environment variable is a list of paths to configuration files.
The original problem was that the content of the file was base64-encoded [2] and used in that format without being decoded first. Thankfully, Terraform has both functions built in, so base64decode [3] would return the "normal" file content. Still, that would be the file content and not a path to a config file. Based on the other comments, I guess the important thing to note is that the additional_roles_aws_auth.yaml file has to be in the same directory as the root module. As the command is a bit more complicated, I am not sure if you could use the Terraform built-in path object [4] to make sure the file is searched for in the root of the module:
kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat ${path.root}/additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
[1] https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable
[2] https://www.terraform.io/language/functions/base64encode
[3] https://www.terraform.io/language/functions/base64decode
[4] https://www.terraform.io/language/expressions/references#filesystem-and-workspace-info
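As a hedged sketch of the path-based approach described above (the local_sensitive_file resource from the hashicorp/local provider and the kubeconfig.yaml filename are my assumptions, not part of the original answer): render the kubeconfig to disk and hand its path, not its content, to KUBECONFIG:

resource "local_sensitive_file" "kubeconfig" {
  # Hypothetical path; anything writable by the Terraform process works.
  filename = "${path.module}/kubeconfig.yaml"
  content  = local.kubeconfig
}

resource "null_resource" "apply" {
  triggers = {
    kubeconfig_path = local_sensitive_file.kubeconfig.filename
    cmd_patch       = <<-EOT
      kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
    EOT
  }

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    environment = {
      # KUBECONFIG now holds a path to a config file, as kubectl expects
      KUBECONFIG = self.triggers.kubeconfig_path
    }
    command = self.triggers.cmd_patch
  }
}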
The base64-encoded kubeconfig is referenced in your command, so you must decode it:
kubectl <YOUR_COMMAND> --kubeconfig <(echo $KUBECONFIG | base64 --decode)
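Applied to the original pipeline, a hedged sketch of that trick (both kubectl invocations need the flag) might look like:

# Hedged sketch: decode the base64 kubeconfig from the environment for both kubectl calls
kubectl --kubeconfig <(echo "$KUBECONFIG" | base64 --decode) get cm aws-auth -n kube-system -o json \
  | jq --arg add "`cat additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' \
  | kubectl --kubeconfig <(echo "$KUBECONFIG" | base64 --decode) apply -f -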

Terraform SHA calculation happens before the file is created

I have a terraform script that keeps failing because I think it tries to calculate the hash of a zip file too early, before the file is actually created.
These are the relevant sections:
data "external" "my_application_layer" {
program = [
"../build/utils/package.sh",
"../packages/sites/my/application/layer/",
"my-application-layer.zip"
]
}
and
resource "aws_lambda_layer_version" "my_application" {
filename = "${path.module}/../packages/sites/my/application/my_application_layer.zip"
layer_name = "${var.resource_name_prefix}-my-application"
source_code_hash = filebase64sha256("${path.module}/../packages/sites/my/application/my-application-layer.zip")
compatible_runtimes = [ "nodejs12.x" ]
depends_on = [
data.external.my_application_layer
]
}
What am I missing?
The actual error message is:
Error: Error in function call
on my-application-lambda.tf line 50, in resource "aws_lambda_layer_version" "my_application":
50: source_code_hash = filebase64sha256("${path.module}/../packages/sites/my/application/my-application-layer.zip")
|----------------
| path.module is "."
Call to function "filebase64sha256" failed: no file exists at
../packages/sites/my/application/my-application-layer.zip; this function works
only with files that are distributed as part of the configuration source code,
so if this file will be created by a resource in this configuration you must
instead obtain this result from an attribute of that resource.
Functions do not participate in the dependency graph, so the depends_on technique won't work here.
Here's one way to do what you need, with the archive_file data source zipping up the folder for you:
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "source"
output_path = "lambda.zip"
}
resource "aws_lambda_function" "my_lambda" {
filename = "lambda.zip"
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
function_name = "my_lambda"
role = "${aws_iam_role.lambda.arn}"
description = "Some AWS lambda"
handler = "index.handler"
runtime = "nodejs4.3"
}
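The same pattern should carry over to the layer from the question; here is a minimal sketch that swaps package.sh for the archive_file data source (my adaptation, assuming archive_file can zip ../packages/sites/my/application/layer/ directly):

data "archive_file" "my_application_layer" {
  type        = "zip"
  source_dir  = "${path.module}/../packages/sites/my/application/layer"
  output_path = "${path.module}/../packages/sites/my/application/my-application-layer.zip"
}

resource "aws_lambda_layer_version" "my_application" {
  filename   = data.archive_file.my_application_layer.output_path
  layer_name = "${var.resource_name_prefix}-my-application"

  # output_base64sha256 is only known after the archive is built, so the hash
  # is computed at the right point in the dependency graph.
  source_code_hash    = data.archive_file.my_application_layer.output_base64sha256
  compatible_runtimes = ["nodejs12.x"]
}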
Give your external data resource an output and reference it from the lambda layer so it has to wait until the package.sh script has finished.
package.sh
#!/bin/bash
SRC=$1
FILENAME=$2
# the zip we are about to create, relative to $SRC
TARGET="../$FILENAME"
cd "$SRC"
zip -r -X "$TARGET" * 1>/dev/null 2>/dev/null
# note: "md5" is the macOS command; on Linux use md5sum instead
echo "{ \"hash\": \"$(cat "$TARGET" | shasum -a 256 | cut -d " " -f 1 | xxd -r -p | base64)\", \"md5\": \"$(cat "$TARGET" | md5)\" }"
Then reference the output from your layer:
source_code_hash = "${data.external.my_application_layer.result.hash}"
In your external.my_application_layer you are creating
my-application-layer.zip
(the name passed to package.sh), but your filename argument points at a different name:
my_application_layer.zip

How to tag an aws_instance with a value from user_data in Terraform?

I have an aws_instance in a terraform file, and I want to tag this instance with a value within my user_data script.
How can I tag my instance with the value of LOGINTOKEN from the user_data script?
Example:
resource "aws_instance" "my_instance" {
ami = "some_ami"
instance_type = "some_instance"
//other configs
user_data = <<EOF
#!/bin/bash
LOGINTOKEN=$(echo { "token": "qwerty12345" } | docker run --rm -i stedolan/jq -r .token)
EOF
tags {
LoginToken = "$LOGINTOKEN"
}
}
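One possible approach, sketched here with hedging (it is not from the original post): since the tags block is resolved by Terraform at plan time, the tag can instead be applied from inside user_data at boot. This assumes the AMI ships the AWS CLI and the instance profile allows ec2:CreateTags; the profile name below is hypothetical.

resource "aws_instance" "my_instance" {
  ami                  = "some_ami"
  instance_type        = "some_instance"
  iam_instance_profile = "profile-with-ec2-create-tags" # hypothetical profile

  user_data = <<EOF
#!/bin/bash
LOGINTOKEN=$(echo '{ "token": "qwerty12345" }' | docker run --rm -i stedolan/jq -r .token)
# Look up this instance's ID and region from the instance metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')
# Tag this instance with the value computed above
aws ec2 create-tags --region "$REGION" --resources "$INSTANCE_ID" --tags Key=LoginToken,Value="$LOGINTOKEN"
EOF
}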

How to get the private IPs of EC2 instances dynamically and put them in /etc/hosts

I would like to create multiple EC2 instances using Terraform and write the private IP addresses of the instances to /etc/hosts on every instance.
Currently I am trying the following code but it's not working:
resource "aws_instance" "ceph-cluster" {
count = "${var.ceph_cluster_count}"
ami = "${var.app_ami}"
instance_type = "t2.small"
key_name = "${var.ssh_key_name}"
vpc_security_group_ids = [
"${var.vpc_ssh_sg_ids}",
"${aws_security_group.ceph.id}",
]
subnet_id = "${element(split(",", var.subnet_ids), count.index)}"
associate_public_ip_address = "true"
// TODO 一時的にIAM固定
//iam_instance_profile = "${aws_iam_instance_profile.app_instance_profile.name}"
iam_instance_profile = "${var.iam_role_name}"
root_block_device {
delete_on_termination = "true"
volume_size = "30"
volume_type = "gp2"
}
connection {
user = "ubuntu"
private_key = "${file("${var.ssh_key}")}"
agent = "false"
}
provisioner "file" {
source = "../../../scripts"
destination = "/home/ubuntu/"
}
tags {
Name = "${var.infra_name}-ceph-cluster-${count.index}"
InfraName = "${var.infra_name}"
}
provisioner "remote-exec" {
inline = [
"cat /etc/hosts",
"cat ~/scripts/ceph/ceph_rsa.pub >> ~/.ssh/authorized_keys",
"cp -arp ~/scripts/ceph/ceph_rsa ~/.ssh/ceph_rsa",
"chmod 700 ~/.ssh/ceph_rsa",
"echo 'IdentityFile ~/.ssh/ceph_rsa' >> ~/.ssh/config",
"echo 'User ubuntu' >> ~/.ssh/config",
"echo '${aws_instance.ceph-cluster.0.private_ip} node01 ceph01' >> /etc/hosts ",
"echo '${aws_instance.ceph-cluster.1.private_ip} node02 ceph02' >> /etc/hosts "
]
}
}
aws_instance.ceph-cluster.*.private_ip
I would like to get the result of the above expression and put it in /etc/hosts on every instance.
I had a similar need for a database cluster (some sort of poor man's Consul alternative), and I ended up using the following Terraform file:
variable "cluster_member_count" {
description = "Number of members in the cluster"
default = "3"
}
variable "cluster_member_name_prefix" {
description = "Prefix to use when naming cluster members"
default = "cluster-node-"
}
variable "aws_keypair_privatekey_filepath" {
description = "Path to SSH private key to SSH-connect to instances"
default = "./secrets/aws.key"
}
# EC2 instances
resource "aws_instance" "cluster_member" {
count = "${var.cluster_member_count}"
# ...
}
# Bash command to populate /etc/hosts file on each instances
resource "null_resource" "provision_cluster_member_hosts_file" {
count = "${var.cluster_member_count}"
# Changes to any instance of the cluster requires re-provisioning
triggers {
cluster_instance_ids = "${join(",", aws_instance.cluster_member.*.id)}"
}
connection {
type = "ssh"
host = "${element(aws_instance.cluster_member.*.public_ip, count.index)}"
user = "ec2-user"
private_key = "${file(var.aws_keypair_privatekey_filepath)}"
}
provisioner "remote-exec" {
inline = [
# Adds all cluster members' IP addresses to /etc/hosts (on each member)
"echo '${join("\n", formatlist("%v", aws_instance.cluster_member.*.private_ip))}' | awk 'BEGIN{ print \"\\n\\n# Cluster members:\" }; { print $0 \" ${var.cluster_member_name_prefix}\" NR-1 }' | sudo tee -a /etc/hosts > /dev/null",
]
}
}
One rule is that each cluster member gets named with the cluster_member_name_prefix Terraform variable followed by the count index (starting at 0): cluster-node-0, cluster-node-1, etc.
This will add the following lines to the /etc/hosts file of each "aws_instance.cluster_member" resource (the exact same lines, in the same order, on every member):
# Cluster members:
10.0.1.245 cluster-node-0
10.0.1.198 cluster-node-1
10.0.1.153 cluster-node-2
In my case, the null_resource that populates the /etc/hosts file was triggered by an EBS volume attachment, but a "${join(",", aws_instance.cluster_member.*.id)}" trigger should work just fine too.
Also, for local development, I added a local-exec provisioner to write each IP to a local cluster_ips.txt file:
resource "null_resource" "write_resource_cluster_member_ip_addresses" {
depends_on = ["aws_instance.cluster_member"]
provisioner "local-exec" {
command = "echo '${join("\n", formatlist("instance=%v ; private=%v ; public=%v", aws_instance.cluster_member.*.id, aws_instance.cluster_member.*.private_ip, aws_instance.cluster_member.*.public_ip))}' | awk '{print \"node=${var.cluster_member_name_prefix}\" NR-1 \" ; \" $0}' > \"${path.module}/cluster_ips.txt\""
# Outputs is:
# node=cluster-node-0 ; instance=i-03b1f460318c2a1c3 ; private=10.0.1.245 ; public=35.180.50.32
# node=cluster-node-1 ; instance=i-05606bc6be9639604 ; private=10.0.1.198 ; public=35.180.118.126
# node=cluster-node-2 ; instance=i-0931cbf386b89ca4e ; private=10.0.1.153 ; public=35.180.50.98
}
}
And, with the following shell command I can add them to my local /etc/hosts file:
awk -F'[;=]' '{ print $8 " " $2 " #" $4 }' cluster_ips.txt >> /etc/hosts
Example:
35.180.50.32 cluster-node-0 # i-03b1f460318c2a1c3
35.180.118.126 cluster-node-1 # i-05606bc6be9639604
35.180.50.98 cluster-node-2 # i-0931cbf386b89ca4e
Terraform provisioners expose a self syntax for getting data about the resource being created.
If you were just interested in the instance being created's private IP address you could use ${self.private_ip} to get at this.
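For instance, a minimal sketch of self inside a provisioner (the resource name and inline command here are only illustrative):

resource "aws_instance" "example" {
  # ...

  provisioner "remote-exec" {
    inline = [
      # self refers to the aws_instance this provisioner is attached to
      "echo 'my private IP is ${self.private_ip}'",
    ]
  }
}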
Unfortunately, if you need to get the IP addresses of multiple sub-resources (e.g. ones created by using the count meta attribute) then you will need to do this outside of the resource's provisioner, using a null_resource.
The null_resource documentation shows a good use case for this:
resource "aws_instance" "cluster" {
count = 3
...
}
resource "null_resource" "cluster" {
# Changes to any instance of the cluster requires re-provisioning
triggers {
cluster_instance_ids = "${join(",", aws_instance.cluster.*.id)}"
}
# Bootstrap script can run on any instance of the cluster
# So we just choose the first in this case
connection {
host = "${element(aws_instance.cluster.*.public_ip, 0)}"
}
provisioner "remote-exec" {
# Bootstrap script called with private_ip of each node in the clutser
inline = [
"bootstrap-cluster.sh ${join(" ", aws_instance.cluster.*.private_ip)}",
]
}
}
but in your case you probably want something like:
resource "aws_instance" "ceph-cluster" {
...
}
resource "null_resource" "ceph-cluster" {
# Changes to any instance of the cluster requires re-provisioning
triggers {
cluster_instance_ids = "${join(",", aws_instance.ceph-cluster.*.id)}"
}
connection {
host = "${element(aws_instance.cluster.*.public_ip, count.index)}"
}
provisioner "remote-exec" {
inline = [
"cat /etc/hosts",
"cat ~/scripts/ceph/ceph_rsa.pub >> ~/.ssh/authorized_keys",
"cp -arp ~/scripts/ceph/ceph_rsa ~/.ssh/ceph_rsa",
"chmod 700 ~/.ssh/ceph_rsa",
"echo 'IdentityFile ~/.ssh/ceph_rsa' >> ~/.ssh/config",
"echo 'User ubuntu' >> ~/.ssh/config",
"echo '${aws_instance.ceph-cluster.0.private_ip} node01 ceph01' >> /etc/hosts ",
"echo '${aws_instance.ceph-cluster.1.private_ip} node02 ceph02' >> /etc/hosts "
]
}
}
This could be a piece of cake with Terraform/Sparrowform. No need for null_resources, and a minimum of fuss:
Bootstrap the infrastructure:
$ terraform apply
Prepare a Sparrowform provision scenario that inserts ALL nodes' public IPs / DNS names into every node's /etc/hosts file:
$ cat sparrowfile
#!/usr/bin/env perl6
use Sparrowform;

my @hosts = (
  "127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4",
  "::1 localhost localhost.localdomain localhost6 localhost6.localdomain6"
);

for tf-resources() -> $r {
  my $rd = $r[1]; # resource data
  next unless $rd<public_ip>;
  next unless $rd<public_dns>;
  next if $rd<public_ip> eq input_params('Host');
  push @hosts, $rd<public_ip> ~ ' ' ~ $rd<public_dns>;
}

file '/etc/hosts', %(
  action  => 'create',
  content => @hosts.join("\n")
);
Give it a run; Sparrowform will execute the scenario on every node:
$ sparrowform --bootstrap --ssh_private_key=~/.ssh/aws.key --ssh_user=ec2-user
PS. disclosure - I am the tool author

Running local-exec provisioner on all EC2 instances after creation

I currently have a Terraform file to create EC2 instances on AWS that looks like this:
resource "aws_instance" "influxdata" {
count = "${var.ec2-count-influx-data}"
ami = "${module.amis.rhel73_id}"
instance_type = "${var.ec2-type-influx-data}"
vpc_security_group_ids = ["${var.sg-ids}"]
subnet_id = "${element(module.infra.subnet,count.index)}"
key_name = "${var.KeyName}"
iam_instance_profile = "Custom-role"
tags {
Name = "influx-data-node"
ASV = "${module.infra.ASV}"
CMDBEnvironment = "${module.infra.CMDBEnvironment}"
OwnerContact = "${module.infra.OwnerContact}"
custodian_downtime = "off"
OwnerEid = "${var.OwnerEid}"
}
ebs_block_device {
device_name = "/dev/sdg"
volume_size = 500
volume_type = "io1"
iops = 2000
encrypted = true
delete_on_termination = true
}
user_data = "${file("terraform/attach_ebs.sh")}"
connection {
private_key = "${file("/Users/usr111/Downloads/usr111_CD.pem")}"
user = "ec2-user"
}
provisioner "remote-exec" {
inline = ["echo just checking for ssh. ttyl. bye."]
}
provisioner "local-exec" {
command = <<EOF
ansible-playbook base-data.yml --key-file=/Users/usr111/Downloads/usr111_CD.pem --user=ec2-user -b -i "${self.private_ip},"
EOF
}
}
resource "aws_route53_record" "influx-data-route" {
count = "${var.ec2-count-influx-data}"
zone_id = "${var.r53-zone}"
name = "influx-data-0${count.index}"
type = "A"
ttl = "300"
// matches up record N to instance N
records = ["${element(aws_instance.influxdata.*.private_ip, count.index)}"]
}
resource "local_file" "inventory-meta" {
filename = "inventory"
content = <<-EOF
[meta]
${join("\n",aws_instance.influxmeta.*.private_ip)}
[data]
${join("\n",aws_instance.influxdata.*.private_ip)}
EOF
}
What I'm struggling to figure out is how to get this part to run after the inventory file is created:
provisioner "local-exec" {
command = <<EOF
ansible-playbook base-data.yml --key-file=/Users/usr111/Downloads/usr111_CD.pem --user=ec2-user -b -i "${self.private_ip},"
EOF
}
Right now I'm passing an IP into Ansible but I want to pass in the inventory file, which is only created after Terraform provisions all of the instances.
Since you are using AWS, maybe you could try using the EC2 dynamic inventory script (ec2.py), in which case your provisioner could look like this:
provisioner "local-exec" {
command = "ansible-playbook -i ec2.py playbook.yml --limit ${self.public_ip}" }
In your playbook you are going to need to wait for SSH to become available since Ansible is making the connection and not Terraform.
- name: wait for ssh
  hosts: localhost
  gather_facts: no
  tasks:
    - local_action: wait_for port=22 host="{{ ip }}" search_regex=OpenSSH delay=10
So the command should look like this:
provisioner "local-exec" {
  command = "ansible-playbook -i ec2.py playbook.yml --limit ${self.public_ip} --extra-vars 'ip=${self.public_ip}'"
}
You can also copy your playbooks to the host with the file provisioner, install Ansible, and run the playbook locally with remote-exec, but that's up to you.
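If you would rather drive Ansible from the generated inventory file, a minimal hedged sketch (reusing the local_file.inventory-meta resource from the question; the trigger and playbook invocation are my assumptions) is a separate null_resource that only runs once the file has been written:

resource "null_resource" "ansible_run" {
  # Re-run whenever the rendered inventory changes
  triggers {
    inventory = "${local_file.inventory-meta.content}"
  }

  provisioner "local-exec" {
    # Referencing local_file.inventory-meta makes Terraform write the "inventory"
    # file before this command runs, so Ansible sees the full host list.
    command = "ansible-playbook base-data.yml --key-file=/Users/usr111/Downloads/usr111_CD.pem --user=ec2-user -b -i ${local_file.inventory-meta.filename}"
  }
}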
