Get most recent/newest image from ECR using terraform data source - terraform

I have an ECR repository named workflow, and there are 5 images in it, pushed by a GitHub Actions workflow.
Now I have a Terraform workflow that just takes the image from ECR and uses it to build the ECS container definition.
So now I want to fetch the newest image, whatever its tag happens to be.
I tried the following:
data "aws_ecr_repository" "example" {
  name = "workflow"
}
and then
"image": "${data.aws_ecr_repository.example.repository_url}"
but here I only get the URL for the repo, without a tag.
So how can I reference the latest/newest image, tag included?

Terraform alone is not capable of this, but if you still want to drive it from Terraform, you can use an external data source:
resource "aws_ecs_task_definition" "snipe-main" {
  container_definitions = <<-TASK_DEFINITION
  [
    {
      "image": "${data.aws_ecr_repository.example.repository_url}:${data.external.current_image.result["image_tag"]}"
    }
  ]
  TASK_DEFINITION
}

data "external" "current_image" {
  program = ["bash", "./ecs-task.sh"]
}

output "get_new_tag" {
  value = data.external.current_image.result["image_tag"]
}
cat ecs-task.sh
#!/bin/bash
set -e
# Newest image: sort by push time and take the last entry's first tag
imageTag=$(aws ecr describe-images --repository-name <<here your repo name>> --query 'sort_by(imageDetails,& imagePushedAt)[-1].imageTags[0]')
# The CLI returns the tag JSON-quoted; strip the surrounding quotes
imageTag=$(sed -e 's/^"//' -e 's/"$//' <<<"$imageTag")
# Emit the single JSON object Terraform's external data source expects
jq -n --arg imageTag "$imageTag" '{"image_tag":$imageTag}'
exit 0
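The external data source requires the program to print exactly one JSON object on stdout. You can sanity-check the script's output contract without calling AWS at all; the tag value below is a stand-in:

```shell
# Stand-in for the value "aws ecr describe-images" prints, which is JSON-quoted
imageTag='"v1.2.3"'
# Strip the surrounding quotes, exactly as the script's sed step does
imageTag=$(printf '%s' "$imageTag" | sed -e 's/^"//' -e 's/"$//')
# Emit the single compact JSON object the external data source expects
jq -nc --arg imageTag "$imageTag" '{"image_tag":$imageTag}'
# -> {"image_tag":"v1.2.3"}
```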

I was looking for the same thing; see whether this documentation suits you:
https://registry.terraform.io/providers/hashicorp/aws/2.34.0/docs/data-sources/ecr_image
It includes a way to look up the image:
data "aws_ecr_image" "service_image" {
  repository_name = "my/service"
  image_tag       = "latest"
}
The problem with that is that "image_uri" isn't exported by the data source.
There is an open pull request on GitHub about it:
https://github.com/hashicorp/terraform-provider-aws/pull/24526
Meanwhile you can build the URL yourself with this format:
"${var.aws_account_id}.dkr.ecr.${var.region}.amazonaws.com/${var.project_name}:${var.latest-Tag}"
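Putting the two together, a sketch of what that could look like (my addition; variable names are examples, and `image_digest` is an attribute the data source does export, so the URI can be pinned by digest instead of a tag):

```hcl
data "aws_ecr_image" "service_image" {
  repository_name = "my/service"
  image_tag       = "latest"
}

locals {
  # Digest-pinned URI built by hand, since image_uri is not exported here
  service_image_uri = "${var.aws_account_id}.dkr.ecr.${var.region}.amazonaws.com/my/service@${data.aws_ecr_image.service_image.image_digest}"
}
```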

Related

How to get the repository ID from an existing Azure DevOps project in Terraform code

I need to get the repository ID of an existing project to work on that repo. There seems to be no way other than the Azure DevOps REST API.
I tried to use the REST API to get the repo ID in my Terraform code:
data "http" "example" {
  url = "https://dev.azure.com/{organization}/{project}/_apis/git/repositories?api-version=6.0"
  request_headers = {
    "Authorization" = "Basic ${base64encode("PAT:${var.personal_access_token}")}"
  }
}

output "repository_id" {
  value = data.http.example.json.value[0].id
}
It yields error while I was running terraform plan:
Error: Unsupported attribute
line 29, in output "repository_id":
29: value = data.http.example.json.value[0].id
I tried also with jsondecode (jq is already installed):
resource "null_resource" "example" {
  provisioner "local-exec" {
    command     = "curl -s -H 'Authorization: Bearer ${var.pat}' https://dev.azure.com/{organization}/{project}/_apis/git/repositories?api-version=6.0 | jq '.value[0].id'"
    interpreter = ["bash", "-c"]
  }
}

output "repo_id" {
  value = "${jsondecode(null_resource.example.stdout).id}"
}
That did not work either.
The Azure DevOps REST API itself works fine; I just cannot get the value from the response into Terraform. What would be the right code, or can this be done without the REST API?
Thank you!
The proper way to interact with an external API and return its output to Terraform is through an external data source. The Terraform docs for that data source include an example of how to create and use one.
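For completeness, here is a different route that stays closer to the original attempt (my addition, not from the answer): decode the http data source response with jsondecode(). There is no .json attribute, which is what caused the error above; the raw body is exposed as response_body on recent versions of the hashicorp/http provider (body on older ones):

```hcl
data "http" "repos" {
  url = "https://dev.azure.com/{organization}/{project}/_apis/git/repositories?api-version=6.0"
  request_headers = {
    Authorization = "Basic ${base64encode("PAT:${var.personal_access_token}")}"
  }
}

output "repository_id" {
  # jsondecode() turns the raw JSON string into a Terraform object
  value = jsondecode(data.http.repos.response_body).value[0].id
}
```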

Terraform aws_lambda_function Requires Docker Image In ECR

I have a module that creates all the infrastructure needed for a Lambda, including the ECR repository that stores the image:
resource "aws_ecr_repository" "image_storage" {
  name                 = "${var.project}/${var.environment}/lambda"
  image_tag_mutability = "MUTABLE"
  image_scanning_configuration {
    scan_on_push = true
  }
}
resource "aws_lambda_function" "executable" {
  function_name = var.function_name
  image_uri     = "${aws_ecr_repository.image_storage.repository_url}:latest"
  package_type  = "Image"
  role          = aws_iam_role.lambda.arn
}
The problem, of course, is that this fails: when aws_lambda_function runs, the repository is there but the image is not, because the image is uploaded by my CI/CD.
So this is a chicken-and-egg problem. Terraform is supposed to manage only infrastructure, so I cannot/should not use it to upload an image (even a dummy one), but I cannot create the infrastructure unless an image is uploaded between the repository and Lambda creation steps.
The only solution I can think of is to create the ECR repository separately from the Lambda and then somehow link it into my Lambda as an existing AWS resource, but that seems kind of clumsy.
Any suggestions?
I ended up using the following solution, where a dummy image is uploaded as part of resource creation.
resource "aws_ecr_repository" "listing" {
  name                 = "myLambda"
  image_tag_mutability = "MUTABLE"
  image_scanning_configuration {
    scan_on_push = true
  }
  provisioner "local-exec" {
    # The tag must include the repository URL, or the push will not reach ECR;
    # "self" refers to this repository inside the provisioner
    command = <<-EOT
      docker pull alpine
      docker tag alpine ${self.repository_url}:latest
      docker push ${self.repository_url}:latest
    EOT
  }
}
Building off @przemek-lach's answer plus @halloei's comment, I wanted to post a fully-working ECR repository that gets provisioned with a dummy image:
data "aws_ecr_authorization_token" "token" {}

resource "aws_ecr_repository" "repository" {
  name                 = "lambda-${local.name}-${local.environment}"
  image_tag_mutability = "MUTABLE"
  tags                 = local.common_tags
  image_scanning_configuration {
    scan_on_push = true
  }
  lifecycle {
    ignore_changes = all
  }
  provisioner "local-exec" {
    # This is a 1-time execution to put a dummy image into the ECR repo, so
    # terraform provisioning works on the lambda function. Otherwise there is
    # a chicken-egg scenario where the lambda can't be provisioned because no
    # image exists in the ECR. "self" is used because a provisioner may not
    # reference its own resource by address.
    command = <<-EOF
      docker login ${data.aws_ecr_authorization_token.token.proxy_endpoint} -u AWS -p ${data.aws_ecr_authorization_token.token.password}
      docker pull alpine
      docker tag alpine ${self.repository_url}:SOME_TAG
      docker push ${self.repository_url}:SOME_TAG
    EOF
  }
}

Terraform | variable | input to bash script

I have variables in a variables.tf file in Terraform:
locals {
  deployment-type = lower(var.env)
}
I am creating an instance that will join an ECS cluster. I need to inject the cluster name into the template file from Terraform vars:
resource "aws_instance" "ec2-instance" {
  # ......
  user_data = "${data.template_file.user_data.rendered}"
}

data "template_file" "user_data" {
  template = "${file("${path.module}/user_data.tpl")}"
}
Here is the tpl file; I need to inject the cluster name into it from TF vars:
# Update all packages
sudo yum update -y
sudo yum install -y ecs-init
sudo service docker start
sudo start ecs
echo "${var.env}"
# Adding cluster name in ecs config
#echo "${var.source_dest_check}" >> /etc/ecs/ecs.config
cat /etc/ecs/ecs.config | grep "ECS_CLUSTER"
Just off the top, I think you are not supposed to use ${var.env} in the template, but rather ${env}.
There are multiple parts to how this works. I'll have to assume you are missing one of these parts. I can provide a simple example.
From my own userdata definitions, some greatly-simplified samples:
variable "product" {
  default = ""
}

data "template_file" "userdata" {
  template = file("${path.module}/userdata.sh")
  vars = {
    product = var.product
  }
}
From my terraform.tfvars file:
product = "radar"
Somewhere in the file userdata.sh, which is the template:
"product": "${product}",
The template is of course more complex than this. I'd suggest starting with a simple file and checking that it came out looking like you want, then build on that. Once you see the vars flowing through the chain as you want, it should be simple to build out a more complex bootstrap.
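As a side note (my addition): on Terraform 0.12 and later, the built-in templatefile() function can replace the template_file data source entirely. A sketch applied to the question's cluster-name use case, where the variable name cluster_name is an example:

```hcl
resource "aws_instance" "ec2_instance" {
  # ...
  user_data = templatefile("${path.module}/user_data.tpl", {
    cluster_name = var.cluster_name
  })
}
```

Inside user_data.tpl you would then write `echo "ECS_CLUSTER=${cluster_name}" >> /etc/ecs/ecs.config`, so the instance registers with the right cluster at boot.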

Terraform - Using Local Command Result as an Variable for tf file

Is there a way to use local-exec to generate an output for a variable inside a Terraform .tf file?
The external data source feature of Terraform helped me.
cat owner.sh
jq -n --arg username "$(git config user.name)" '{"username": $username}'
The config that must be added to the instance_create.tf files:
data "external" "owner_tag_generator" {
  program = ["bash", "/full/path/of/owner.sh"]
}

output "owner" {
  value = "${data.external.owner_tag_generator.result}"
}
tags {
  ...
  CreatorName = "${data.external.owner_tag_generator.result.username}"
  ...
}
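One caveat worth flagging (my addition, not in the original answer): the command substitution in owner.sh should be quoted, because jq treats an unquoted multi-word name as multiple arguments. A quick check with a stand-in name:

```shell
# Stand-in for $(git config user.name); real names commonly contain spaces
username="Jane Doe"
# Quoting "$username" keeps the whole name as one jq argument
jq -nc --arg username "$username" '{"username": $username}'
# -> {"username":"Jane Doe"}
```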

Terraform null resource execution order

The problem:
I'm trying to build a Docker Swarm cluster on Digital Ocean, consisting of 3 "manager" nodes and however many worker nodes. The number of worker nodes isn't particularly relevant for this question. I'm trying to module-ize the Docker Swarm provisioning stuff, so its not specifically coupled to the digitalocean provider, but instead can receive a list of ip addresses to act against provisioning the cluster.
In order to provision the master nodes, the first node needs to be put into swarm mode, which generates a join key that the other master nodes will use to join the first one. "null_resource"s are being used to execute remote provisioners against the master nodes; however, I cannot figure out how to make sure the first master node completes its work ("docker swarm init ...") before another "null_resource" provisioner executes against the other master nodes that need to join it. They all run in parallel and, predictably, it doesn't work.
Further, trying to figure out how to collect the first node's generated join-token and make it available to the other nodes. I've considered doing this with Consul, and storing the join token as a key, and getting that key on the other nodes - but this isn't ideal as... there are still issues with ensuring the Consul cluster is provisioned and ready (so kind of the same problem).
main.tf
variable "master_count" { default = 3 }

# master nodes
resource "digitalocean_droplet" "master_nodes" {
  count = "${var.master_count}"
  # ... etc, etc
}

module "docker_master" {
  source     = "./docker/master"
  private_ip = "${digitalocean_droplet.master_nodes.*.ipv4_address_private}"
  public_ip  = "${digitalocean_droplet.master_nodes.*.ipv4_address}"
  instances  = "${var.master_count}"
}
docker/master/main.tf
variable "instances" {}
variable "private_ip" { type = "list" }
variable "public_ip" { type = "list" }

# Act only on the first item in the list of masters...
resource "null_resource" "swarm_master" {
  count = 1

  # Just to ensure this gets run every time
  triggers {
    version = "${timestamp()}"
  }

  connection {
    ...
    host = "${element(var.public_ip, 0)}"
  }

  provisioner "remote-exec" {
    inline = [<<-EOF
      ... install docker, then ...
      docker swarm init --advertise-addr ${element(var.private_ip, 0)}
      MANAGER_JOIN_TOKEN=$(docker swarm join-token manager -q)
      # need to do something with the join token, like make it available
      # as an attribute for interpolation in the next "null_resource" block
    EOF
    ]
  }
}

# Act on the other 2 swarm master nodes (*not* the first one)
resource "null_resource" "other_swarm_masters" {
  count = "${var.instances - 1}"

  triggers {
    version = "${timestamp()}"
  }

  # Host key slices the 3-element IP list and excludes the first one
  connection {
    ...
    host = "${element(slice(var.public_ip, 1, length(var.public_ip)), count.index)}"
  }

  provisioner "remote-exec" {
    inline = [<<-EOF
      SWARM_MASTER_JOIN_TOKEN=$(consul kv get docker/swarm/manager/join_token)
      docker swarm join --token ??? ${element(var.private_ip, 0)}:2377
    EOF
    ]
  }

  ##### THIS IS THE MEAT OF THE QUESTION ###
  # How do I make this "null_resource" block not run until the other one has
  # completed and generated the swarm token output? depends_on doesn't
  # seem to do it :(
}
From reading through github issues, I get the feeling this isn't an uncommon problem... but its kicking my ass. Any suggestions appreciated!
@victor-m's comment is correct. If you use a null_resource and add a trigger on the former resource's id, they will execute in order.
resource "null_resource" "first" {
  provisioner "local-exec" {
    command = "echo 'first' > newfile"
  }
}

resource "null_resource" "second" {
  triggers = {
    order = null_resource.first.id
  }
  provisioner "local-exec" {
    command = "echo 'second' >> newfile"
  }
}

resource "null_resource" "third" {
  triggers = {
    order = null_resource.second.id
  }
  provisioner "local-exec" {
    command = "echo 'third' >> newfile"
  }
}
$ terraform apply
null_resource.first: Creating...
null_resource.first: Provisioning with 'local-exec'...
null_resource.first (local-exec): Executing: ["/bin/sh" "-c" "echo 'first' > newfile"]
null_resource.first: Creation complete after 0s [id=3107778766090269290]
null_resource.second: Creating...
null_resource.second: Provisioning with 'local-exec'...
null_resource.second (local-exec): Executing: ["/bin/sh" "-c" "echo 'second' >> newfile"]
null_resource.second: Creation complete after 0s [id=3159896803213063900]
null_resource.third: Creating...
null_resource.third: Provisioning with 'local-exec'...
null_resource.third (local-exec): Executing: ["/bin/sh" "-c" "echo 'third' >> newfile"]
null_resource.third: Creation complete after 0s [id=6959717123480445161]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
To make sure, cat the new file and here's the output as expected
$ cat newfile
first
second
third
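Applied to the question's own configuration, the same trigger-based ordering would look roughly like this (an untested sketch; resource and variable names are taken from the question, in its Terraform 0.11 syntax):

```hcl
# Referencing the init node's id in a trigger makes Terraform create
# "other_swarm_masters" only after "swarm_master" has completed
resource "null_resource" "other_swarm_masters" {
  count = "${var.instances - 1}"

  triggers {
    swarm_master_ready = "${null_resource.swarm_master.id}"
  }

  # ... connection and remote-exec provisioner as in the question ...
}
```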
