I have a module that creates all the infrastructure needed for a Lambda, including the ECR repository that stores the image:
resource "aws_ecr_repository" "image_storage" {
  name                 = "${var.project}/${var.environment}/lambda"
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }
}

resource "aws_lambda_function" "executable" {
  function_name = var.function_name
  image_uri     = "${aws_ecr_repository.image_storage.repository_url}:latest"
  package_type  = "Image"
  role          = aws_iam_role.lambda.arn
}
The problem with this, of course, is that it fails: when aws_lambda_function is created, the repository exists but the image does not, because the image is uploaded by my CI/CD.
So this is a chicken-and-egg problem. Terraform is supposed to be used only for infrastructure, so I cannot/should not use it to upload an image (even a dummy one), but I cannot create the infrastructure unless an image is uploaded between the repository and Lambda creation steps.
The only solution I can think of is to create the ECR repository separately from the Lambda and then somehow link it into my Lambda as an existing AWS resource, but that seems kind of clumsy.
Any suggestions?
I ended up using the following solution, where a dummy image is uploaded as part of resource creation.
resource "aws_ecr_repository" "listing" {
  name                 = "my-lambda" # ECR repository names must be lowercase
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }

  provisioner "local-exec" {
    # Assumes docker is already authenticated to the registry
    # (e.g. via aws ecr get-login-password). Note the use of self:
    # a resource cannot reference its own address in a provisioner.
    command = <<-EOT
      docker pull alpine
      docker tag alpine ${self.repository_url}:latest
      docker push ${self.repository_url}:latest
    EOT
  }
}
Building off @przemek-lach's answer plus @halloei's comment, I wanted to post a fully working ECR repository that gets provisioned with a dummy image:
data "aws_ecr_authorization_token" "token" {}

resource "aws_ecr_repository" "repository" {
  name                 = "lambda-${local.name}-${local.environment}"
  image_tag_mutability = "MUTABLE"
  tags                 = local.common_tags

  image_scanning_configuration {
    scan_on_push = true
  }

  lifecycle {
    ignore_changes = all
  }

  provisioner "local-exec" {
    # This is a one-time execution to put a dummy image into the ECR repo, so
    # Terraform provisioning works on the Lambda function. Otherwise there is
    # a chicken-and-egg scenario where the Lambda can't be provisioned because
    # no image exists in the ECR repo. Note the use of self: a resource cannot
    # reference its own address inside its own provisioner.
    command = <<-EOF
      docker login ${data.aws_ecr_authorization_token.token.proxy_endpoint} -u AWS -p ${data.aws_ecr_authorization_token.token.password}
      docker pull alpine
      docker tag alpine ${self.repository_url}:SOME_TAG
      docker push ${self.repository_url}:SOME_TAG
    EOF
  }
}
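To close the loop, the Lambda can then reference the repository. A sketch (the function name and IAM role are placeholders, not from the answer above), with ignore_changes so that images pushed later by CI/CD don't get reverted by Terraform:

```hcl
resource "aws_lambda_function" "executable" {
  function_name = "my-function"
  package_type  = "Image"
  image_uri     = "${aws_ecr_repository.repository.repository_url}:SOME_TAG"
  role          = aws_iam_role.lambda.arn

  lifecycle {
    # CI/CD updates the image out of band; don't let Terraform fight over it.
    ignore_changes = [image_uri]
  }
}
```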
I have an ECR repository named workflow, and there are 5 images in this repository, pushed using GitHub Actions.
Now I have a Terraform workflow that just uses an image from ECR to build the ECS container definition.
So I want to fetch the latest image, whatever its tag happens to be.
I tried the following:
data "aws_ecr_repository" "example" {
  name = "workflow"
}
and then
"image": "${data.aws_ecr_repository.example.repository_url}"
but here I only get the URL for the repo, without a tag.
So how can I get the latest/newest image tag here?
Terraform cannot determine the most recently pushed tag on its own, but if you still want to do this in Terraform you can use an external data source:
resource "aws_ecs_task_definition" "snipe-main" {
  container_definitions = <<TASK_DEFINITION
[
  {
    "image": "${data.aws_ecr_repository.example.repository_url}:${data.external.current_image.result["image_tag"]}"
  }
]
TASK_DEFINITION
}

data "external" "current_image" {
  program = ["bash", "./ecs-task.sh"]
}

output "get_new_tag" {
  value = data.external.current_image.result["image_tag"]
}
cat ecs-task.sh

#!/bin/bash
set -e
# Pick the first tag of the most recently pushed image.
imageTag=$(aws ecr describe-images --repository-name <<here your repo name>> --query 'sort_by(imageDetails,& imagePushedAt)[-1].imageTags[0]')
# Strip the surrounding quotes from the AWS CLI output.
imageTag=$(sed -e 's/^"//' -e 's/"$//' <<<"$imageTag")
# Emit the JSON object the external data source expects.
jq -n --arg imageTag "$imageTag" '{"image_tag":$imageTag}'
exit 0
I was looking for the same; see if this documentation suits you:
https://registry.terraform.io/providers/hashicorp/aws/2.34.0/docs/data-sources/ecr_image
It includes a way to obtain the image:
data "aws_ecr_image" "service_image" {
  repository_name = "my/service"
  image_tag       = "latest"
}
The problem with that is that image_uri isn't in the data source.
There is an open pull request on GitHub about it:
https://github.com/hashicorp/terraform-provider-aws/pull/24526
Meanwhile you can use this format for the URL:
"${var.aws_account_id}.dkr.ecr.${var.region}.amazonaws.com/${var.project_name}:${var.latest-Tag}"
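Putting the pieces together, here is a hedged sketch of composing a full image URI from the data sources above. Pinning by digest instead of tag is my addition, and the attribute names are taken from the current AWS provider docs, so they may differ in older provider versions:

```hcl
data "aws_ecr_repository" "service" {
  name = "my/service"
}

data "aws_ecr_image" "service_image" {
  repository_name = data.aws_ecr_repository.service.name
  image_tag       = "latest"
}

locals {
  # Pinning by digest ensures the task always runs exactly the image
  # that "latest" resolved to at plan time.
  service_image_uri = "${data.aws_ecr_repository.service.repository_url}@${data.aws_ecr_image.service_image.image_digest}"
}
```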
I would like to generate SSH keys with a local-exec provisioner, then read the content of the files.
resource "null_resource" "generate-ssh-keys-pair" {
  provisioner "local-exec" {
    command = <<-EOT
      ssh-keygen -t rsa -b 4096 -C "test" -P "" -f "testkey"
    EOT
  }
}

data "local_file" "public-key" {
  depends_on = [null_resource.generate-ssh-keys-pair]
  filename   = "testkey.pub"
}

data "local_file" "private-key" {
  depends_on = [null_resource.generate-ssh-keys-pair]
  filename   = "testkey"
}
terraform plan works, but when I run the apply I get errors saying testkey and testkey.pub don't exist.
Thanks
Instead of generating a file using an external command and then reading it in, I would suggest using the Terraform tls provider to generate the key within Terraform itself, using tls_private_key:
terraform {
  required_providers {
    tls = {
      source = "hashicorp/tls"
    }
  }
}

resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits  = 4096
}
The tls_private_key resource type exports two attributes that are equivalent to the two files you were intending to read in your example:
tls_private_key.example.private_key_pem: the private key in PEM format
tls_private_key.example.public_key_openssh: the public key in the format OpenSSH expects to find in .ssh/authorized_keys.
Please note the warning in the tls_private_key documentation that using this resource will cause the private key data to be saved in your Terraform state, and so you should protect that state data accordingly. That would also have been true for your approach of reading files from disk using data resources, because any value Terraform has available for use in expressions must always be stored in the state.
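If the key material is still needed on disk for an external tool, a minimal sketch using the hashicorp/local provider (the filenames are just placeholders; local_sensitive_file requires local provider >= 2.2):

```hcl
resource "local_sensitive_file" "private_key" {
  content         = tls_private_key.example.private_key_pem
  filename        = "${path.module}/testkey"
  file_permission = "0600"
}

resource "local_file" "public_key" {
  content  = tls_private_key.example.public_key_openssh
  filename = "${path.module}/testkey.pub"
}
```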
I ran your code and there are no problems with it. It correctly generates testkey and testkey.pub.
So whatever causes it to fail for you, it's not the snippet of code you've provided in the question. The fault must be outside that snippet.
Generating an SSH key in Terraform is inherently insecure because it gets stored in the tfstate file. I had a similar problem to solve, and I thought the most secure/usable approach was to use a secret management service together with a cloud bucket for the backend:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.3.0"
    }
  }

  backend "gcs" {
    bucket = "tfstatebucket"
    prefix = "terraform/production"
  }
}

//Import Ansible private key from Google Secrets Manager
resource "google_secret_manager_secret" "ansible_private_key" {
  secret_id = var.ansible_private_key_secret_id

  replication {
    automatic = true
  }
}

data "google_secret_manager_secret_version" "ansible_private_key" {
  secret  = google_secret_manager_secret.ansible_private_key.secret_id
  version = var.ansible_private_key_secret_id_version_number
}

resource "local_file" "ansible_imported_local_private_key" {
  sensitive_content = data.google_secret_manager_secret_version.ansible_private_key.secret_data
  filename          = var.ansible_imported_local_private_key_filename
  file_permission   = "0600"
}
In the case of GCP, I would add the secret in Google Secrets Manager, then use terraform import on the secret, which will in turn write it to the backend bucket. That way it doesn't get stored in Git as plain text, you can have the key file local to your project (.terraform shouldn't be under version control), and it's arguably more secure in the bucket.
So the workflow essentially is:
Human --> Secret Manager
Secret Manager --> Terraform Import --> GCS Bucket Backend
                                    |--> Create .terraform/ssh_key
.terraform/ssh_key --> Terraform/Ansible/Whatever
Hashicorp Vault would be another way to address this
I am new to using Terraform/Azure and I have been trying to get an automation job working for a few days, which has wasted a lot of my time, as I couldn't find any solutions on the internet.
So, if anyone knows how to pull and deploy a Docker image from an Azure container registry using Terraform, please share the details; any assistance will be much appreciated.
A sample code snippet would be of great assistance.
You can use the Docker provider and the docker_image resource from Terraform, which pulls the image to your local Docker registry. There are some authentication options for authenticating to ACR.
provider "docker" {
  host = "unix:///var/run/docker.sock"

  registry_auth {
    address  = "<ACR_NAME>.azurecr.io"
    username = "<DOCKER_USERNAME>"
    password = "<DOCKER_PASSWORD>"
  }
}

resource "docker_image" "my_image" {
  name = "<ACR_NAME>.azurecr.io/<IMAGE>:<TAG>"
}

output "image_id" {
  value = docker_image.my_image.name
}
Then use the image to deploy a container with the docker_container resource. I have not used this resource, so I cannot give you a snippet, but there are some examples in the documentation.
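For completeness, here is an untested sketch of what the docker_container side might look like (the container name and ports are placeholders; the image_id attribute is from recent kreuzwerker/docker provider versions, older versions exposed .latest instead):

```hcl
resource "docker_container" "my_container" {
  name  = "my-app"
  # Reference the pulled image so the container is recreated when it changes.
  image = docker_image.my_image.image_id

  ports {
    internal = 80
    external = 8080
  }
}
```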
One option to pull (and run) an image from ACR with Terraform is using Container Instances. A simple example for the image reference:
resource "azurerm_container_group" "example" {
  name                = "my-continst"
  location            = "location"
  resource_group_name = "name"
  ip_address_type     = "public"
  dns_name_label      = "aci-uniquename"
  os_type             = "Linux"

  container {
    name   = "hello-world"
    image  = "acr-name.azurecr.io/aci-helloworld:v1"
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = 80
      protocol = "TCP"
    }
  }

  image_registry_credential {
    server   = "acr-name.azurecr.io"
    username = ""
    password = ""
  }
}
I was trying to do
terraform apply
but I am getting the error below:
1 error(s) occurred:
digitalocean_droplet.testvm[0]: Resource 'digitalocean_droplet.testvm' not found for variable
'digitalocean_droplet.testvm.ipv4_address'
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with any
resources that successfully completed. Please address the error above
and apply again to incrementally change your infrastructure.
How can I pass the public IP of the created droplet to the local-exec provisioner command?
Below is my .tf file:
provider "digitalocean" {
  token = "----TOKEN----"
}

resource "digitalocean_droplet" "testvm" {
  count              = "10"
  name               = "do-instance-${count.index}"
  image              = "ubuntu-16-04-x64"
  size               = "512mb"
  region             = "nyc3"
  ipv6               = true
  private_networking = false

  ssh_keys = [
    "----SSH KEY----"
  ]

  provisioner "local-exec" {
    command = "fab production deploy ${digitalocean_droplet.testvm.ipv4_address}"
  }
}
Thanks in advance!
For the local-exec provisioner you can make use of the self keyword; in this case it would be ${self.ipv4_address}.
My guess is that your snippet would have worked if you hadn't put count = "10" on the testvm droplet. You can also make use of ${count.index}.
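Applied to the droplet in the question, the provisioner would look roughly like this (a sketch; the remaining droplet arguments stay as in the question):

```hcl
resource "digitalocean_droplet" "testvm" {
  count = "10"
  # ... name, image, size, region, ssh_keys as in the question ...

  provisioner "local-exec" {
    # self refers to the particular droplet instance being created,
    # which is what makes this work with count set.
    command = "fab production deploy ${self.ipv4_address}"
  }
}
```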
More info: https://www.terraform.io/docs/provisioners/
Also, I found a GitHub issue that might be helpful to you.
Hope it helps!
Let's assume we have some DO tags:
resource "digitalocean_tag" "foo" {
  name = "foo"
}

resource "digitalocean_tag" "bar" {
  name = "bar"
}
And we have configured swarm worker nodes with mentioned tags.
resource "digitalocean_droplet" "swarm_data_worker" {
  name = "swarm-worker-${count.index}"

  tags = [
    "${digitalocean_tag.foo.id}",
    "${digitalocean_tag.bar.id}"
  ]

  // swarm node config stuff

  provisioner "remote-exec" {
    inline = [
      "docker swarm join --token ${data.external.swarm_join_token.result.worker} ${digitalocean_droplet.swarm_manager.ipv4_address_private}:2377"
    ]
  }
}
I want to label created swarm node with corresponding resource (droplet) tags.
To label worker nodes we need to run on the swarm master:
docker node update --label-add foo --label-add bar worker-node
How can we automate this with terraform?
Got it! Probably not the best way to solve the issue, but until a Terraform release with full swarm support comes out, I can't find anything better.
The main idea is to use pre-installed DO ssh key:
variable "public_key_path" {
  description = "DigitalOcean public key"
  default     = "~/.ssh/hcmc_swarm/key.pub"
}

variable "do_key_name" {
  description = "Name of the key on Digital Ocean"
  default     = "terraform"
}

resource "digitalocean_ssh_key" "default" {
  name       = "${var.do_key_name}"
  public_key = "${file(var.public_key_path)}"
}
Then we can provision manager:
resource "digitalocean_droplet" "swarm_manager" {
  ...
  ssh_keys = ["${digitalocean_ssh_key.default.id}"]

  provisioner "remote-exec" {
    inline = [
      "docker swarm init --advertise-addr ${digitalocean_droplet.swarm_manager.ipv4_address_private}"
    ]
  }
}
And finally, we can connect to the swarm_manager via SSH once a worker is ready:
# Docker swarm labels list
variable "swarm_data_worker__lables" {
  type    = "list"
  default = ["type=data-worker"]
}

resource "digitalocean_droplet" "swarm_data_worker" {
  ...

  provisioner "remote-exec" {
    inline = [
      "ssh -o StrictHostKeyChecking=no root@${digitalocean_droplet.swarm_manager.ipv4_address_private} docker node update --label-add ${join(" --label-add ", var.swarm_data_worker__lables)} ${self.name}",
    ]
  }
}
Please, if you know a better approach to solve this issue, don't hesitate to point it out in a new answer or comment.
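One variation, sketched but untested: instead of SSH-ing from the worker to the manager, point the provisioner's connection at the manager so the labeling command runs there directly (the private key path is a placeholder matching the key pair above):

```hcl
resource "digitalocean_droplet" "swarm_data_worker" {
  # ... worker config as above ...

  provisioner "remote-exec" {
    # Connect to the manager, not the worker being created.
    connection {
      host        = "${digitalocean_droplet.swarm_manager.ipv4_address}"
      user        = "root"
      private_key = "${file("~/.ssh/hcmc_swarm/key")}"
    }

    inline = [
      "docker node update --label-add ${join(" --label-add ", var.swarm_data_worker__lables)} ${self.name}",
    ]
  }
}
```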