I'm trying to execute the following command:
kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
as part of a local Terraform execution as follows:
locals {
  kubeconfig = yamlencode({
    apiVersion      = "v1"
    kind            = "Config"
    current-context = "terraform"
    clusters = [{
      name = module.eks.cluster_id
      cluster = {
        certificate-authority-data = module.eks.cluster_certificate_authority_data
        server                     = module.eks.cluster_endpoint
      }
    }]
    contexts = [{
      name = "terraform"
      context = {
        cluster = module.eks.cluster_id
        user    = "terraform"
      }
    }]
    users = [{
      name = "terraform"
      user = {
        token = data.aws_eks_cluster_auth.this.token
      }
    }]
  })
}
resource "null_resource" "apply" {
triggers = {
kubeconfig = base64encode(local.kubeconfig)
cmd_patch = <<-EOT
kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
EOT
}
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
environment = {
KUBECONFIG = self.triggers.kubeconfig
}
command = self.triggers.cmd_patch
}
}
Executing the same command outside of Terraform, plainly on the command line, works fine.
However, I always get the following error when it is executed as part of the Terraform script:
│ ': exit status 1. Output:
│ iAic2FtcGxlLWNsdXN0ZXI...WaU5ERXdNekEiCg==":
│ open
│ ImFwaVZlcnNpb24iOiAidjEiy...RXdNekEiCg==:
│ file name too long
Does anybody have any idea what the issue could be?
As per my comment: the KUBECONFIG environment variable needs to be a list of paths to configuration files, not the content of the file itself [1]:
The KUBECONFIG environment variable is a list of paths to configuration files.
The original problem was that the content of the file was base64-encoded [2] and used in that format without decoding it first. Thankfully, Terraform has both functions built in, so base64decode [3] would return the "normal" file content. Still, that would be the file content and not a path to the config file. Based on the other comments, the important thing to note is that the additional_roles_aws_auth.yaml file has to be in the same directory as the root module. Since the command is a bit more involved, you could use Terraform's built-in path object [4] to make sure the file is looked up relative to the root module:
kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat ${path.root}/additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
[1] https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable
[2] https://www.terraform.io/language/functions/base64encode
[3] https://www.terraform.io/language/functions/base64decode
[4] https://www.terraform.io/language/expressions/references#filesystem-and-workspace-info
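A minimal sketch of how that fix could look, assuming the hashicorp/local provider is available: write the rendered kubeconfig to disk and point KUBECONFIG at the file's path (the local_sensitive_file resource and the kubeconfig_terraform filename are my additions, not from the original setup):
# Persist the rendered kubeconfig so KUBECONFIG can be a path, not content
resource "local_sensitive_file" "kubeconfig" {
  content  = local.kubeconfig
  filename = "${path.root}/kubeconfig_terraform"
}

resource "null_resource" "apply" {
  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    environment = {
      # A path to a configuration file, as the docs require
      KUBECONFIG = local_sensitive_file.kubeconfig.filename
    }
    command = "kubectl get cm aws-auth -n kube-system -o json | jq --arg add \"`cat ${path.root}/additional_roles_aws_auth.yaml`\" '.data.mapRoles += $add' | kubectl apply -f -"
  }
}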
The base64-encoded kubeconfig is what your command receives, so you must decode it:
kubectl <YOUR_COMMAND> --kubeconfig <(echo $KUBECONFIG | base64 --decode)
Related
I'm trying to use Terraform without any cloud instance, only to locally install a cloudflared tunnel, using this construction:
resource "null_resource" "tunell_install" {
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = "/home/uzer/script/tunnel.sh"
}
}
instead of something like:
provider "google" {
project = var.gcp_project_id
}
but after running
$ terraform apply -auto-approve
the file /etc/cloudflared/cert.json was successfully created, with this content:
{
  "AccountTag"   : "${account}",
  "TunnelID"     : "${tunnel_id}",
  "TunnelName"   : "${tunnel_name}",
  "TunnelSecret" : "${secret}"
}
but as I understand it, there should be values here instead of variables? It seems that metadata_startup_script from instance.tf is only applied to the Google instance. How can I change this so Terraform installs the CF tunnel locally and runs it? Maybe I also need to use templatefile, but in another .tf file? The current metadata_startup_script code block:
// This is where we configure the server (aka instance). Variables like web_zone take a terraform variable and provide it to the server so that it can use them as a local variable
metadata_startup_script = templatefile("./server.tpl",
  {
    web_zone    = var.cloudflare_zone,
    account     = var.cloudflare_account_id,
    tunnel_id   = cloudflare_argo_tunnel.auto_tunnel.id,
    tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name,
    secret      = random_id.tunnel_secret.b64_std
  })
Content of server.tpl file:
# Script to install Cloudflare Tunnel
# cloudflared configuration
cd
# The package for this OS is retrieved
wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.deb
sudo dpkg -i cloudflared-stable-linux-amd64.deb
# A local user directory is first created before we can install the tunnel as a system service
mkdir ~/.cloudflared
touch ~/.cloudflared/cert.json
touch ~/.cloudflared/config.yml
# Another heredoc is used to dynamically populate the JSON credentials file
cat > ~/.cloudflared/cert.json << "EOF"
{
"AccountTag" : "${account}",
"TunnelID" : "${tunnel_id}",
"TunnelName" : "${tunnel_name}",
"TunnelSecret" : "${secret}"
}
EOF
# Same concept with the Ingress Rules the tunnel will use
cat > ~/.cloudflared/config.yml << "EOF"
tunnel: ${tunnel_id}
credentials-file: /etc/cloudflared/cert.json
logfile: /var/log/cloudflared.log
loglevel: info
ingress:
- hostname: ssh.${web_zone}
service: ssh://localhost:22
- hostname: "*"
service: hello-world
EOF
# Now we install the tunnel as a systemd service
sudo cloudflared service install
# The credentials file does not get copied over so we'll do that manually
sudo cp -via ~/.cloudflared/cert.json /etc/cloudflared/
# Now we can start the tunnel
sudo service cloudflared start
In argo.tf this code exists:
data "template_file" "init" {
template = file("server.tpl")
vars = {
web_zone = var.cloudflare_zone,
account = var.cloudflare_account_id,
tunnel_id = cloudflare_argo_tunnel.auto_tunnel.id,
tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name,
secret = random_id.tunnel_secret.b64_std
}
}
If you are asking about how to create the file locally and populate the values, here is an example:
resource "local_file" "cloudflare_tunnel_script" {
content = templatefile("${path.module}/server.tpl",
{
web_zone = "webzone"
account = "account"
tunnel_id = "id"
tunnel_name = "name"
secret = "secret"
}
)
filename = "${path.module}/server.sh"
}
For this to work, you would have to assign the real values for all the template variables listed above. From what I see, there are already examples of how to use variables for those values. In other words, instead of hardcoding the values for template variables you could use standard variables:
resource "local_file" "cloudflare_tunnel_script" {
content = templatefile("${path.module}/server.tpl",
{
web_zone = var.cloudflare_zone
account = var.cloudflare_account_id
tunnel_id = cloudflare_argo_tunnel.auto_tunnel.id
tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name
secret = random_id.tunnel_secret.b64_std
}
)
filename = "${path.module}/server.sh"
}
This code will populate all the values and create a server.sh script in the same directory you are running the Terraform code from.
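One detail worth checking before wiring this into local-exec: depending on the provider version and your umask, the rendered file may not come out executable. The hashicorp/local provider supports an explicit file_permission argument to pin this down; the same resource with that argument added (the argument is my addition, not part of the original answer):
resource "local_file" "cloudflare_tunnel_script" {
  content = templatefile("${path.module}/server.tpl",
    {
      web_zone    = var.cloudflare_zone
      account     = var.cloudflare_account_id
      tunnel_id   = cloudflare_argo_tunnel.auto_tunnel.id
      tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name
      secret      = random_id.tunnel_secret.b64_std
    }
  )
  filename = "${path.module}/server.sh"
  # Make sure local-exec can run the rendered script directly
  file_permission = "0755"
}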
You could complement this code with the null_resource you wanted:
resource "null_resource" "tunnel_install" {
depends_on = [
local_file.cloudflare_tunnel_script,
]
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = "${path.module}/server.sh"
}
}
resource "google_service_account" "myaccount" {
account_id = "dev-foo-account"
}
resource "google_service_account_key" "mykey" {
service_account_id = google_service_account.myaccount.name
}
data "google_service_account_key" "mykey" {
name = google_service_account_key.mykey.name
public_key_type = "TYPE_X509_PEM_FILE"
}
If I create a Service Account and a key like this - how do I obtain the key afterwards?
terraform output yields:
$ terraform output -json google_service_account_key
The output variable requested could not be found in the state
file. If you recently added this to your configuration, be
sure to run `terraform apply`, since the state won't be updated
with new output variables until that command is run.
You have to declare that value as an output if you want to use it after applying the plan:
output "my_private_key" {
value = data.google_service_account_key.mykey.private_key
}
To output the value of "my_private_key":
$ terraform output my_private_key
To obtain the credentials as JSON, which can later be used for authentication:
$ terraform output -raw my_private_key | base64 -d
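For example, to authenticate the gcloud CLI with the decoded key (the key.json filename here is just an illustration):
# Decode the base64-encoded key into a JSON credentials file
terraform output -raw my_private_key | base64 -d > key.json
# Authenticate as the service account; the account is read from the key file
gcloud auth activate-service-account --key-file=key.json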
I have a terraform script that keeps failing because I think it tries to calculate the hash of a zip file too early, before the file is actually created.
These are the relevant sections:
data "external" "my_application_layer" {
program = [
"../build/utils/package.sh",
"../packages/sites/my/application/layer/",
"my-application-layer.zip"
]
}
and
resource "aws_lambda_layer_version" "my_application" {
filename = "${path.module}/../packages/sites/my/application/my_application_layer.zip"
layer_name = "${var.resource_name_prefix}-my-application"
source_code_hash = filebase64sha256("${path.module}/../packages/sites/my/application/my-application-layer.zip")
compatible_runtimes = [ "nodejs12.x" ]
depends_on = [
data.external.my_application_layer
]
}
What am I missing?
The actual error message is:
Error: Error in function call
on my-application-lambda.tf line 50, in resource "aws_lambda_layer_version" "my_application":
50: source_code_hash = filebase64sha256("${path.module}/../packages/sites/my/application/my-application-layer.zip")
|----------------
| path.module is "."
Call to function "filebase64sha256" failed: no file exists at
../packages/sites/my/application/my-application-layer.zip; this function works
only with files that are distributed as part of the configuration source code,
so if this file will be created by a resource in this configuration you must
instead obtain this result from an attribute of that resource.
Functions do not participate in the dependency graph, so the depends_on technique won't work here.
Here's one way to do what you need, with the archive_file data source zipping up the folder for you:
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "source"
output_path = "lambda.zip"
}
resource "aws_lambda_function" "my_lambda" {
filename = "lambda.zip"
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
function_name = "my_lambda"
role = "${aws_iam_role.lambda.arn}"
description = "Some AWS lambda"
handler = "index.handler"
runtime = "nodejs4.3"
}
Give your external data resource an output and reference it from the lambda layer so it has to wait until the package.sh script has finished.
package.sh
#!/bin/bash
SRC=$1
FILENAME=$2
TARGET="../$FILENAME"
cd "$SRC"
# Zip the source folder quietly
zip -r -X "$TARGET" * 1>/dev/null 2>/dev/null
# Emit JSON for the external data source: a base64-encoded SHA256
# (the format source_code_hash expects) plus an MD5 (use md5sum on Linux)
echo "{ \"hash\": \"$(cat "$TARGET" | shasum -a 256 | cut -d " " -f 1 | xxd -r -p | base64)\", \"md5\": \"$(cat "$TARGET" | md5)\" }"
Then reference the hash output from your layer, so Terraform has to wait for the script to finish:
source_code_hash = "${data.external.my_application_layer.result.hash}"
Separately, there is a name mismatch: your external.my_application_layer creates
my-application-layer.zip
but the filename argument references a different name:
my_application_layer.zip
Is there a way to use local-exec to generate an output for a variable inside of a Terraform .tf file?
The external data source feature of Terraform helped me here:
cat owner.sh
jq -n --arg username "$(git config user.name)" '{"username": $username}'
The config that needs to be added to the instance_create.tf file:
data "external" "owner_tag_generator" {
program = ["bash", "/full/path/of/owner.sh"]
}
output "owner" {
value = "${data.external.owner_tag_generator.result}"
}
tags {
...
CreatorName = "${data.external.owner_tag_generator.result.username}"
...
}
I have a bash script that will return a single AMI ID. I want to use that AMI ID returned from the bash script as an input for my launch configuration.
data "external" "amiid" {
program = ["bash", "${path.root}/scripts/getamiid.sh"]
}
resource "aws_launch_configuration" "bastion-lc" {
name_prefix = "${var.lc_name}-"
image_id = "${data.external.amiid.result}"
instance_type = "${var.instance_type}"
placement_tenancy = "default"
associate_public_ip_address = false
security_groups = ["${var.bastion_sg_id}"]
iam_instance_profile = "${aws_iam_instance_profile.bastion-profile.arn}"
lifecycle {
create_before_destroy = true
}
}
When I run this with terraform plan I get an error saying
* module.bastion.data.external.amiid: 1 error(s) occurred:
* module.bastion.data.external.amiid: data.external.amiid: command "bash" produced invalid JSON: invalid character 'a' looking for beginning of object key string
Here's the getamiid.sh script:
#!/bin/bash
amiid=$(curl -s "https://someurl" | jq -r 'map(select(.tags.osVersion | startswith("os"))) | max_by(.tags.creationDate) | .id')
echo -n "{ami_id:\"${amiid}\"}"
When running the script, it returns:
{ami_id:"ami-xxxyyyzzz"}
Got it working with:
#!/bin/bash
amiid=$(curl -s "someurl" | jq -r 'map(select(.tags.osVersion | startswith("someos"))) | max_by(.tags.creationDate) | .id')
echo -n "{\"ami_id\":\"${amiid}\"}"
which returns
{"ami_id":"ami-xxxyyyzzz"}
Then in the Terraform resource, we reference it with:
image_id = "${element(split(",", data.external.amiid.result["ami_id"]), count.index)}"