How to add ECS attributes to an instance using Terraform

I heavily use ECS Attributes on our containerized infrastructure. I couldn't find terraform docs to achieve this. Do I need to execute aws cli commands manually to apply those attributes after creating the infrastructure?

I'd recommend having the ECS agent set the ECS attributes if you need these.
You can do this by adding ECS_INSTANCE_ATTRIBUTES to the /etc/ecs/ecs.config file or passing them as an environment variable directly to the ECS agent on startup.
If you have a "base" ECS AMI (either one you rolled your own or the Amazon Linux AMI) then you probably just want to use user data to dynamically set this from Terraform.

You can use "aws_ecs_service" resource and add attributes. For example:
placement_constraints {
  type       = "memberOf"
  expression = "attribute:ecs.availability-zone in [us-west-2a, us-west-2b]"
}

Related

Terraform doesn't have a Ground Truth resource. How do I create my own resource?

As far as I can tell, Terraform doesn't have any support for SageMaker Ground Truth. However, the AWS CLI does support it.
I don't want to create a whole new provider as a plugin, especially as this falls under aws.
How do I create my own resource within the existing aws provider?
You have a couple options here (and in general, when something isn't supported by the Terraform AWS provider).
If the resource in question is supported by CloudFormation, you can use the aws_cloudformation_stack Terraform resource to create a custom CloudFormation stack that creates and tracks the state of the resource. Here's the CloudFormation documentation for SageMaker; see if you can find the resource you want in there anywhere.
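As a rough illustration of that wrapping pattern (this uses a SageMaker notebook instance purely as an example of a CloudFormation-supported SageMaker resource; whether the specific Ground Truth resource you need exists in CloudFormation is something to verify):

resource "aws_cloudformation_stack" "sagemaker_example" {
  name = "sagemaker-example-stack"

  # Terraform creates/updates/destroys the stack; CloudFormation manages the resource
  template_body = <<STACK
{
  "Resources": {
    "Notebook": {
      "Type": "AWS::SageMaker::NotebookInstance",
      "Properties": {
        "InstanceType": "ml.t2.medium",
        "RoleArn": "${var.sagemaker_role_arn}"
      }
    }
  }
}
STACK
}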
If it's only supported by the CLI (not by CloudFormation), you can use the CLI in your Terraform configuration. This is the module I like to use for doing CLI work in Terraform. The downside is that you must have the AWS CLI installed on whatever machine you're doing the terraform apply on.
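If you'd rather not depend on a separate module, the underlying pattern is simply a null_resource that shells out to the CLI. A hedged sketch (labeling-job.json and the trigger are placeholders, and this doesn't handle updates or deletes of the job):

resource "null_resource" "ground_truth_job" {
  # Re-run the CLI whenever the job definition file changes
  triggers = {
    job_config = "${md5(file("${path.module}/labeling-job.json"))}"
  }

  provisioner "local-exec" {
    # Requires the AWS CLI on the machine running terraform apply
    command = "aws sagemaker create-labeling-job --cli-input-json file://${path.module}/labeling-job.json"
  }
}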

How can I apply one of my resources by name in Terraform?

Terraform will try to deploy all resources defined in the Terraform configuration files. There are a lot of resources in my application, like Lambda, API Gateway, ECS, etc. I wonder whether I can deploy only one resource. For example, I want to deploy one Lambda function only and don't want to apply the other resources. How can I do that in Terraform?
terraform apply -target=aws_lambda_function.test_function
More information on the usage of -target can be found in the terraform apply documentation.

aws_emr_cluster - is it possible to retrieve the instance identifiers

I am creating an EMR cluster using the aws_emr_cluster resource in terraform.
I need to get access to the instance ID of the underlying EC2 hardware, specifically the MASTER node.
It does not appear in the attributes, nor when I perform a terraform show.
The data definitely exists and is available in AWS.
Does anyone know how I can get at this value, and how to do it using Terraform?
You won't be able to access the individual nodes (EC2 instances) of an EMR cluster through Terraform. It is the same case for Auto Scaling Groups.
If Terraform tracked EMR or ASG nodes, the state file would change every time the EMR cluster or ASG scaled or replaced an instance, so storing that instance information isn't a good fit for Terraform.
Instead, you can use the AWS SDK/CLI/boto3 to look them up.
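That said, if you do need the IDs inside Terraform itself, one possible workaround (a sketch, assuming your AWS provider version has the aws_instances data source and that your cluster resource is named aws_emr_cluster.example) is to look the instances up by the tags EMR puts on its EC2 nodes:

data "aws_instances" "emr_master" {
  instance_tags = {
    "aws:elasticmapreduce:job-flow-id"         = "${aws_emr_cluster.example.id}"
    "aws:elasticmapreduce:instance-group-role" = "MASTER"
  }
}

output "emr_master_instance_ids" {
  value = "${data.aws_instances.emr_master.ids}"
}

Note the data source only finds anything once the cluster's instances actually exist, e.g. on a subsequent apply.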

Terraform: how to dynamically create microservices with ECS?

I am stuck with Terraform. I want to dynamically create ECS services with Terraform.
I have a configuration like this:
module/cluster/cluster.tf
module/service/service.tf
What I want to do is inject the service name from Jenkins into the Terraform configuration, so that if the service doesn't exist, it gets created (and updated if it already exists).
I tried to set up different S3 remote state backends, but I can't manage to build the whole infrastructure in one terraform apply.
Is there any way to specify the service configuration dynamically so that it creates them on demand?
Terraform supports using environment variables of the form TF_VAR_<variable> to change values on the fly.
From environment variables
Terraform will read environment variables in the form of TF_VAR_name to find the value for a variable. For example, the TF_VAR_access_key variable can be set to set the access_key variable.
Note: Environment variables can only populate string-type variables. List and map type variables must be populated via one of the other mechanisms.
For example,
TF_VAR_environment=development terraform plan
https://www.terraform.io/intro/getting-started/variables.html#from-environment-variables
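Applied to your case, a minimal sketch (the cluster and task definition references, aws_ecs_cluster.main and aws_ecs_task_definition.app, are assumptions for illustration):

variable "service_name" {}

resource "aws_ecs_service" "service" {
  name            = "${var.service_name}"
  cluster         = "${aws_ecs_cluster.main.id}"
  task_definition = "${aws_ecs_task_definition.app.arn}"
  desired_count   = 1
}

Jenkins can then run something like TF_VAR_service_name=orders terraform apply; Terraform creates the service if it isn't in the state yet and updates it if it is.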

Terraform and Updates

Being able to capture infrastructure in a single Terraform file has obvious benefits. However, I am not clear in my mind how - once, for example, a virtual machine has been created - subsequent updates are handled.
So, to provide a specific scenario. Suppose that using Terraform we set up an Azure vm with SQL Server 2014. Then, after a month we decide that we should like to update that vm with the latest service pack for SQL Server 2014 that has just been released.
Is the recommended practice that we update the Terraform configuration file and re-apply it?
I have to disagree with the other two responses. Terraform can handle infrastructure updates just fine. The key thing to understand, however, is that Terraform largely follows an immutable infrastructure paradigm, which means that to "update" a resource, you delete the old resource and create a new one to replace it. This is much like functional programming, where variables are immutable, and to "update" something, you actually create a new variable.
The typical pattern with Terraform is to use it to deploy a server image, such as a Virtual Machine (VM) image (e.g. an Amazon Machine Image (AMI)) or a container image (e.g. a Docker image). When you want to "update" something, you create a new version of your image, deploy that onto a new server, and undeploy the old server.
Here's an example of how that works:
Imagine that you're building a Ruby on Rails app. You get the app working in dev and it's time to deploy to prod. The first step is to package the app as an AMI. You could do this using a tool like Packer. Now you have an AMI with id ami-1234.
Here is a Terraform template you could use to deploy this AMI on a server (an EC2 Instance) in AWS with an Elastic IP Address attached to it:
resource "aws_instance" "example" {
ami = "ami-1234"
instance_type = "t2.micro"
}
resource "aws_eip" "example" {
instance = "${aws_instance.example.id}"
}
When you run terraform apply, Terraform deploys the server, attaches an IP address to it, and now when users visit that IP, they will see v1 of your Rails app.
Some time later, you update your Rails app and want to deploy the new version, v2. To do that, you build a new AMI (i.e. you run Packer again) to get an AMI with ID ami-5678. You update your Terraform templates accordingly:
resource "aws_instance" "example" {
ami = "ami-5678"
instance_type = "t2.micro"
}
When you run terraform apply, Terraform undeploys the old server (which it can find because Terraform records the state of your infrastructure), deploys a new server with the new AMI, and now users will see v2 of your code at that same IP.
Of course, there is one problem here: in between the time when Terraform undeploys v1 and when it deploys v2, your users would see downtime. To work around that, you could use Terraform's create_before_destroy lifecycle setting:
resource "aws_instance" "example" {
ami = "ami-5678"
instance_type = "t2.micro"
lifecycle {
create_before_destroy = true
}
}
With create_before_destroy set to true, Terraform will create the replacement server first, switch the IP to it, and then remove the old server. This allows you to do zero-downtime deployment with immutable infrastructure (note: zero-downtime deployment works better with a load balancer that can do health checks than a simple IP address, especially if your server takes a long time to boot).
For more information on this, check out the book Terraform: Up & Running. The code samples for the book include an example of a zero-downtime deployment with a cluster of servers and a load balancer: https://github.com/brikis98/terraform-up-and-running-code
Terraform is an infrastructure provisioning tool; the configuration/deployment tools would be:
chef
saltstack
ansible
etc.,
As I am working with Chef: I provision the server instance with Terraform, then Terraform (via a provisioner) hands control over to Chef for system configuration and deployment.
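A hedged sketch of that handoff (the Chef server URL, key path, run list, and connection details are all placeholders; check the chef provisioner arguments for your Terraform version):

resource "aws_instance" "app" {
  ami           = "${var.ami_id}"
  instance_type = "t2.micro"

  connection {
    type        = "ssh"
    host        = "${self.public_ip}"
    user        = "ubuntu"
    private_key = "${file("~/.ssh/id_rsa")}"
  }

  # Terraform provisions the instance, then hands configuration over to Chef
  provisioner "chef" {
    server_url      = "https://chef.example.com/organizations/example"
    node_name       = "app-server"
    run_list        = ["recipe[sql_server]"]
    user_name       = "terraform"
    user_key        = "${file("../keys/terraform.pem")}"
    recreate_client = true
  }
}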
For the moment, Terraform cannot delete the node/client from the Chef server, so after you terraform destroy, you need to remove them yourself.
Terraform isn't best placed for this sort of task. Terraform is an infrastructure management tool, not configuration management.
You should use tools such as chef, puppet, and ansible to deal with the configuration of the system.
If you must use Terraform for this task, you could create a template_file resource containing the configuration required to install SQL Server, and to upgrade it if a different version is presented. Reference: here.
Put that code inside a provisioner under a null_resource resource. Reference: here.
The trigger for this could be the variable containing the SQL Server version, so when you present a different version it'll execute that provisioner on each instance to perform the upgrade.
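A rough sketch of that approach (the variable names, connection details, and upgrade script are placeholders):

variable "sql_version" {}

resource "null_resource" "sql_server_upgrade" {
  # Re-run the provisioner whenever the requested SQL Server version changes
  triggers = {
    sql_version = "${var.sql_version}"
  }

  provisioner "remote-exec" {
    connection {
      type     = "winrm"
      host     = "${var.vm_ip}"
      user     = "${var.admin_user}"
      password = "${var.admin_password}"
    }

    inline = [
      "powershell.exe -File C:/scripts/install-sql-servicepack.ps1 -Version ${var.sql_version}"
    ]
  }
}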
