I have a terraform script that keeps failing because I think it tries to calculate the hash of a zip file too early, before the file is actually created.
These are the relevant sections:
data "external" "my_application_layer" {
program = [
"../build/utils/package.sh",
"../packages/sites/my/application/layer/",
"my-application-layer.zip"
]
}
and
resource "aws_lambda_layer_version" "my_application" {
filename = "${path.module}/../packages/sites/my/application/my_application_layer.zip"
layer_name = "${var.resource_name_prefix}-my-application"
source_code_hash = filebase64sha256("${path.module}/../packages/sites/my/application/my-application-layer.zip")
compatible_runtimes = [ "nodejs12.x" ]
depends_on = [
data.external.my_application_layer
]
}
what am I missing?
the actual error message is:
Error: Error in function call
on my-application-lambda.tf line 50, in resource "aws_lambda_layer_version" "my_application":
50: source_code_hash = filebase64sha256("${path.module}/../packages/sites/my/application/my-application-layer.zip")
|----------------
| path.module is "."
Call to function "filebase64sha256" failed: no file exists at
../packages/sites/my/application/my-application-layer.zip; this function works
only with files that are distributed as part of the configuration source code,
so if this file will be created by a resource in this configuration you must
instead obtain this result from an attribute of that resource.
Functions do not participate in the dependency graph, so the depends_on technique won't work here.
Here's one way to do what you need, with the archive_file data source zipping up the folder for you:
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "source"
output_path = "lambda.zip"
}
resource "aws_lambda_function" "my_lambda" {
filename = "lambda.zip"
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
function_name = "my_lambda"
role = "${aws_iam_role.lambda.arn}"
description = "Some AWS lambda"
handler = "index.handler"
runtime = "nodejs4.3"
}
Give your external data resource an output and reference it from the lambda layer so it has to wait until the package.sh script has finished.
package.sh
#!/bin/bash
SRC=$1
FILENAME=$2
cd "$SRC" || exit 1
zip -r -X "../$FILENAME" * 1>/dev/null 2>/dev/null
echo "{ \"hash\": \"$(cat "../$FILENAME" | shasum -a 256 | cut -d " " -f 1 | xxd -r -p | base64)\", \"md5\": \"$(cat "../$FILENAME" | md5)\" }"
Then reference that output from your layer (the script's hash key is the base64-encoded SHA-256, which is the format source_code_hash expects):
source_code_hash = "${data.external.my_application_layer.result.hash}"
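Putting it together, the layer resource might look like this (a sketch assuming the zip lands next to the source directory, as in the question):

```hcl
resource "aws_lambda_layer_version" "my_application" {
  filename   = "${path.module}/../packages/sites/my/application/my-application-layer.zip"
  layer_name = "${var.resource_name_prefix}-my-application"

  # Reading the hash from the data source creates an implicit dependency,
  # so Terraform must run package.sh before creating the layer version.
  source_code_hash = "${data.external.my_application_layer.result.hash}"

  compatible_runtimes = ["nodejs12.x"]
}
```

The explicit depends_on block is no longer needed, because the attribute reference already establishes the dependency.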
Also, in your external.my_application_layer you are creating
my-application-layer.zip
but your filename argument references a different name (underscores instead of hyphens):
my_application_layer.zip
Related
I'm trying to execute the following command:
kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
as part of a local Terraform execution, as follows:
locals {
kubeconfig = yamlencode({
apiVersion = "v1"
kind = "Config"
current-context = "terraform"
clusters = [{
name = module.eks.cluster_id
cluster = {
certificate-authority-data = module.eks.cluster_certificate_authority_data
server = module.eks.cluster_endpoint
}
}]
contexts = [{
name = "terraform"
context = {
cluster = module.eks.cluster_id
user = "terraform"
}
}]
users = [{
name = "terraform"
user = {
token = data.aws_eks_cluster_auth.this.token
}
}]
})
}
resource "null_resource" "apply" {
triggers = {
kubeconfig = base64encode(local.kubeconfig)
cmd_patch = <<-EOT
kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
EOT
}
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
environment = {
KUBECONFIG = self.triggers.kubeconfig
}
command = self.triggers.cmd_patch
}
}
Executing the same command outside of Terraform, plainly on the command line works fine.
However, I always get the following error when executing as part of the Terraform script:
│ ': exit status 1. Output:
│ iAic2FtcGxlLWNsdXN0ZXI...WaU5ERXdNekEiCg==":
│ open
│ ImFwaVZlcnNpb24iOiAidjEiy...RXdNekEiCg==:
│ file name too long
Does anybody have any ideas about what the issue could be?
As per my comment: the KUBECONFIG environment variable needs to be a list of paths to configuration files, not the content of the file itself [1]:
The KUBECONFIG environment variable is a list of paths to configuration files.
The original problem was that the content of the file was base64-encoded [2] and used in that format without being decoded first. Thankfully, Terraform has both functions built in, so base64decode [3] would return the "normal" file content. Still, that would be the file content, not a path to the config file. Based on the other comments, the important thing to note is that the additional_roles_aws_auth.yaml file has to be in the same directory as the root module. As the command is a bit more complicated, you might be able to use the Terraform built-in path object [4] to make sure the file is searched for in the root of the module:
kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat ${path.root}/additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
[1] https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable
[2] https://www.terraform.io/language/functions/base64encode
[3] https://www.terraform.io/language/functions/base64decode
[4] https://www.terraform.io/language/expressions/references#filesystem-and-workspace-info
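A minimal sketch of that approach: write the kubeconfig to disk first so KUBECONFIG can hold a path rather than content (the local_file resource name here is made up):

```hcl
# Write the rendered kubeconfig to disk; KUBECONFIG then receives a path,
# not the file content.
resource "local_file" "kubeconfig" {
  content  = local.kubeconfig
  filename = "${path.module}/kubeconfig.yaml"
}

resource "null_resource" "apply" {
  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    environment = {
      KUBECONFIG = local_file.kubeconfig.filename
    }
    command = <<-EOT
      kubectl get cm aws-auth -n kube-system -o json | jq --arg add "`cat ${path.root}/additional_roles_aws_auth.yaml`" '.data.mapRoles += $add' | kubectl apply -f -
    EOT
  }
}
```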
The base64-encoded kubeconfig is used in your command, so you must decode it:
kubectl <YOUR_COMMAND> --kubeconfig <(echo $KUBECONFIG | base64 --decode)
Is there a way to use local-exec to generate an output for a variable inside of a Terraform .tf file?
The external data source feature of Terraform helped me here:
cat owner.sh
jq -n --arg username $(git config user.name) '{"username": $username}'
The config parts that must be added to the instance_create.tf file:
data "external" "owner_tag_generator" {
program = ["bash", "/full/path/of/owner.sh"]
}
output "owner" {
value = "${data.external.owner_tag_generator.result}"
}
tags {
...
CreatorName = "${data.external.owner_tag_generator.result.username}"
...
}
I have a bash script that will return a single AMI ID. I want to use that AMI ID returned from the bash script as an input for my launch configuration.
data "external" "amiid" {
program = ["bash", "${path.root}/scripts/getamiid.sh"]
}
resource "aws_launch_configuration" "bastion-lc" {
name_prefix = "${var.lc_name}-"
image_id = "${data.external.amiid.result}"
instance_type = "${var.instance_type}"
placement_tenancy = "default"
associate_public_ip_address = false
security_groups = ["${var.bastion_sg_id}"]
iam_instance_profile = "${aws_iam_instance_profile.bastion-profile.arn}"
lifecycle {
create_before_destroy = true
}
}
When I run this with terraform plan I get an error saying
* module.bastion.data.external.amiid: 1 error(s) occurred:
* module.bastion.data.external.amiid: data.external.amiid: command "bash" produced invalid JSON: invalid character 'a' looking for beginning of object key string
Here's the getamiid.sh script:
#!/bin/bash
amiid=$(curl -s "https://someurl" | jq -r 'map(select(.tags.osVersion | startswith("os"))) | max_by(.tags.creationDate) | .id')
echo -n "{ami_id:\"${amiid}\"}"
When running the script by itself, it returns:
{ami_id:"ami-xxxyyyzzz"}
Got it working with:
#!/bin/bash
amiid=$(curl -s "someurl" | jq -r 'map(select(.tags.osVersion | startswith("someos"))) | max_by(.tags.creationDate) | .id')
echo -n "{\"ami_id\":\"${amiid}\"}"
which returns
{"ami_id":"ami-xxxyyyzzz"}
Then in the Terraform resource, we call it with:
image_id = "${element(split(",", data.external.amiid.result["ami_id"]), count.index)}"
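If the script only ever returns a single AMI ID (no comma-separated list), the plain map lookup should be enough on its own:

```hcl
# data.external results are maps of strings, so index the key directly.
image_id = "${data.external.amiid.result["ami_id"]}"
```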
I have this code in terraform:
data "archive_file" "lambdazip" {
type = "zip"
output_path = "lambda_launcher.zip"
source_dir = "lambda/etc"
source_dir = "lambda/node_modules"
source {
content = "${data.template_file.config_json.rendered}"
filename = "config.json"
}
}
I get the following errors when I do terraform plan:
* data.archive_file.lambdazip: "source": conflicts with source_dir
("lambda/node_modules")
* data.archive_file.lambdazip: "source_content_filename": conflicts
with source_dir ("lambda/node_modules")
* data.archive_file.lambdazip: "source_dir": conflicts with
source_content_filename ("/home/user1/experiments/grascenote-
poc/init.tpl")
I am using terraform version v0.9.11
@Ram is correct. You cannot use both the source_dir and source arguments in the same archive_file data source.
config_json.tpl
{"test": "${override}"}
Terraform
Terraform 0.12 and higher
Use templatefile()
main.tf
# create the template file config_json separately from the archive_file block
resource "local_file" "config" {
content = templatefile("${path.module}/config_json.tpl", {
override = "my value"
})
filename = "${path.module}/lambda/etc/config.json"
}
Terraform 0.11 and below
Use template provider.
main.tf
data "template_file" "config_json" {
template = "${file("${path.module}/config_json.tpl")}"
vars = {
override = "my value"
}
}
# create the template file config_json separately from the archive_file block
resource "local_file" "config" {
content = "${data.template_file.config_json.rendered}"
filename = "${path.module}/lambda/etc/config.json"
}
Next steps
Add to main.tf
# now you can grab the entire lambda source directory or specific subdirectories
data "archive_file" "lambdazip" {
type = "zip"
output_path = "lambda_launcher.zip"
source_dir = "${path.module}/lambda/"
depends_on = [
local_file.config,
]
}
Terraform run
$ terraform init
$ terraform apply
data.template_file.config_json: Refreshing state...
data.archive_file.lambdazip: Refreshing state...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ local_file.config
id: <computed>
content: "{\"test\": \"my value\"}\n"
filename: "/Users/user/lambda/config.json"
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
local_file.config: Creating...
content: "" => "{\"test\": \"my value\"}\n"
filename: "" => "/Users/user/lambda/config.json"
local_file.config: Creation complete after 0s (ID: 05894e86414856969d915db57e21008563dfcc38)
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
List the contents of the new zip file
$ unzip -l lambda_launcher.zip
Archive: lambda_launcher.zip
Length Date Time Name
--------- ---------- ----- ----
21 01-01-2049 00:00 etc/config.json
22 01-01-2049 00:00 node_modules/index.js
--------- -------
43 2 files
In the case of a Node.js lambda function, you need to use a local_file resource together with depends_on, and keep rendered files separate from the static directory.
First, put the static directories (etc, node_modules) into the "lambda" folder, without any rendered files.
Second, put the templates for rendered files in any other path.
data "template_file" "config_json" {
template = "${file("${path.module}/config_json.tpl")}"
vars = {
foo = "bar"
}
}
resource "local_file" "config_json" {
content = "${data.template_file.config_json.rendered}"
filename = "${path.module}/lambda/config.json"
}
data "archive_file" "lambda_zip" {
type = "zip"
output_path = "${path.module}/lambda_function.zip"
source_dir = "${path.module}/lambda"
# This depends_on is important: it ensures config.json is rendered before zipping.
depends_on = [
"local_file.config_json"
]
}
resource "aws_lambda_function" "lambda" {
filename = "${path.module}/lambda_function.zip"
function_name = "lambda_function"
role = "${aws_iam_role.lambda.arn}"
handler = "index.handler"
runtime = "nodejs10.x"
}
resource "aws_iam_role" "lambda" {
...
I am trying to implement an AWS Lambda function using Terraform.
I simply have a null_resource with a local-exec provisioner and a resource.archive_file that zips the source code after all preparation is done.
resource "null_resource" "deps" {
triggers = {
package_json = "${base64sha256(file("${path.module}/src/package.json"))}"
}
provisioner "local-exec" {
command = "cd ${path.module}/src && npm install"
}
}
resource "archive_file" "function" {
type = "zip"
source_dir = "${path.module}/src"
output_path = "${path.module}/function.zip"
depends_on = [ "null_resource.deps" ]
}
Recent changes to Terraform deprecated resource.archive_file, so data.archive_file should be used instead. Unfortunately, data sources execute before resources, so the local provisioner from the dependent resource is called well after the zip is created. The code below no longer produces the deprecation warning, but it does not work at all.
resource "null_resource" "deps" {
triggers = {
package_json = "${base64sha256(file("${path.module}/src/package.json"))}"
}
provisioner "local-exec" {
command = "cd ${path.module}/src && npm install"
}
}
data "archive_file" "function" {
type = "zip"
source_dir = "${path.module}/src"
output_path = "${path.module}/function.zip"
depends_on = [ "null_resource.deps" ]
}
Am I missing something? What is the correct way to do this with recent versions?
Terraform: v0.7.11
OS: Win10
Turns out there is an issue with the way Terraform core handles depends_on for data resources. There are a couple of issues reported, one in the archive provider and another in the core.
The following workaround is listed in the archive provider issue. Note that it uses a data.null_data_source to sit between the null_resource and data.archive_file which makes it an explicit dependency as opposed to an implicit dependency with depends_on.
resource "null_resource" "lambda_exporter" {
# (some local-exec provisioner blocks, presumably...)
triggers = {
index = "${base64sha256(file("${path.module}/lambda-files/index.js"))}"
}
}
data "null_data_source" "wait_for_lambda_exporter" {
inputs = {
# This ensures that this data resource will not be evaluated until
# after the null_resource has been created.
lambda_exporter_id = "${null_resource.lambda_exporter.id}"
# This value gives us something to implicitly depend on
# in the archive_file below.
source_dir = "${path.module}/lambda-files/"
}
}
data "archive_file" "lambda_exporter" {
output_path = "${path.module}/lambda-files.zip"
source_dir = "${data.null_data_source.wait_for_lambda_exporter.outputs["source_dir"]}"
type = "zip"
}
There is a new data source in Terraform 0.8, external that allows you to run external commands and extract output. See data.external
The data source should only be used for retrieving some dependency value, not for executing the npm install; you should still do that via the null_resource. Since this is a Terraform data source, it should not have any side effects (although you may need some in this case, not sure).
So basically, null_resource does the dependencies, data.external grabs some value that you can depend on for the archive (directory path for example), then data.archive_file performs the archiving.
This would probably work best with a pseudo random directory name potentially to make dirty checks work a little cleaner.
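A sketch of that split, under the assumption that echoing the directory path back through data.external is enough to create the implicit dependency (the resource and data source names here are illustrative):

```hcl
# 1. null_resource performs the side effect (npm install).
resource "null_resource" "deps" {
  triggers = {
    package_json = "${base64sha256(file("${path.module}/src/package.json"))}"
  }
  provisioner "local-exec" {
    command = "cd ${path.module}/src && npm install"
  }
}

# 2. data.external just echoes the source directory back as JSON, but
#    referencing the null_resource id in its query makes it wait for the
#    install. (The program must consume the query JSON on stdin.)
data "external" "wait_for_deps" {
  program = ["bash", "-c", "cat - >/dev/null; echo '{\"source_dir\": \"${path.module}/src\"}'"]
  query = {
    deps_id = "${null_resource.deps.id}"
  }
}

# 3. archive_file depends on the data source through the interpolated path.
data "archive_file" "function" {
  type        = "zip"
  source_dir  = "${data.external.wait_for_deps.result["source_dir"]}"
  output_path = "${path.module}/function.zip"
}
```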
Here is another working example that doesn't use the null_data_source solution. This workaround combines everything into the null_resource block, as I was still facing issues with the provided solution.
resource "null_resource" "dependancies" {
provisioner "local-exec" {
command = <<EOT
mkdir script-files
cp import.py script-files
pip3 install requests --target script-files
cd script-files
chmod -R 644 $(find . -type f)
chmod -R 755 $(find . -type d)
zip -r ../lambda_function.zip *
EOT
working_dir = path.module
}
triggers = {
always_run = "${timestamp()}"
}
}
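A function resource could then consume that zip, waiting on the null_resource explicitly since nothing else links them (the role, handler, and function names here are placeholders):

```hcl
resource "aws_lambda_function" "import" {
  filename      = "${path.module}/lambda_function.zip"
  function_name = "import"
  role          = aws_iam_role.lambda.arn   # assumed to be defined elsewhere
  handler       = "import.lambda_handler"
  runtime       = "python3.8"

  # Explicit dependency: the zip only exists after the provisioner has run.
  depends_on = [null_resource.dependancies]
}
```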