I am trying to use Terraform as part of my continuous deployment pipeline. I am using Terraform to create a snapshot of my production EBS volume (for backup purposes) prior to executing any other pipeline tasks.
I can get Terraform to take the snapshot; however, the issue is that Terraform will not create a new snapshot on each run. Instead, it detects that a snapshot already exists and does nothing.
For example:
Terraform Apply Execution 1 - Snapshot successfully taken.
Terraform Apply Execution 2 - No snapshot taken.
The code I am using for Terraform is provided below.
provider "aws" {
access_key = "..."
secret_key = "..."
region = "..."
}
resource "aws_ebs_snapshot" "example_snapshot" {
volume_id = "vol-xyz"
tags = {
Name = "continuous_deployment_backup"
}
}
Does anyone know how I can force Terraform to create a new EBS snapshot each time it is run?
To avoid repetitive manual tasks in a continuous deployment pipeline, one option is to use CloudWatch Events rules that run on a schedule to automate Amazon EBS snapshots.
You can check out the tutorial AWS suggests in its CloudWatch documentation.
You can also use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation, retention, and deletion of the snapshots taken to back up your Amazon EBS volumes, still managing everything through Terraform via the aws_dlm_lifecycle_policy resource, for instance.
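For example, a minimal sketch of such a policy (the IAM role, tag values, schedule, and retention below are placeholders to adapt, not part of the original question):
resource "aws_dlm_lifecycle_policy" "daily_ebs_snapshots" {
  description        = "Daily EBS snapshots via DLM"
  execution_role_arn = aws_iam_role.dlm_lifecycle.arn # assumed IAM role with DLM permissions
  state              = "ENABLED"

  policy_details {
    resource_types = ["VOLUME"]

    # Snapshot every volume tagged Backup = "true"
    target_tags = {
      Backup = "true"
    }

    schedule {
      name = "daily"

      create_rule {
        interval      = 24
        interval_unit = "HOURS"
        times         = ["23:45"]
      }

      retain_rule {
        count = 14 # keep two weeks of snapshots
      }

      copy_tags = true
    }
  }
}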
I have a set of cloud run services created/maintained via terraform cloud.
When I create a new version, a github actions workflow pushes a new image to gcr.io.
Now in a normal scenario, I'd call:
gcloud run deploy auth-service --image gcr.io/riu-production/auth-service:latest
And a new version would be up. But if I do this while the resource is managed by Terraform, the next terraform apply will fail, saying it can't create that Cloud Run service because a service with that name already exists. So the state drifts and Terraform no longer recognizes the service.
A simple solution is to connect the pipeline to Terraform Cloud and run terraform apply -auto-approve for deployment purposes. That should work.
The problem with that is I really, really don't want to apply Terraform commands in a pipeline, for now.
And the biggest one is I really would like to keep terraform out of the deployment process altogether.
Is there any way to force cloud run to take that new image for a service without messing up the terraform infrastructure?
Cloud Run config:
resource "google_cloud_run_service" "auth-service" {
  name     = "auth-service"
  location = var.gcp_region
  project  = var.gcp_project

  template {
    spec {
      service_account_name = module.cloudrun-sa.email

      containers {
        image = "gcr.io/${var.gcp_project}/auth-service:latest"
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }
}
In theory, yes, it should be possible...
But I would recommend against it; you should be running terraform apply on every deployment to guarantee the infrastructure is as expected.
Here are some things you can try:
Keep track of when it changes and use the import on that resource:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/cloud_run_service#import
Look into the lifecycle ignore_changes meta-argument; you can ignore the attribute that triggers the change (a sketch follows the link below):
https://www.terraform.io/language/meta-arguments/lifecycle#ignore_changes
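A minimal sketch of that second option, assuming the service definition from the question (the exact attribute path into the nested blocks may need adjusting for your provider version):
resource "google_cloud_run_service" "auth-service" {
  name     = "auth-service"
  location = var.gcp_region
  project  = var.gcp_project

  # ... template and traffic blocks as in the question ...

  lifecycle {
    # Let deployments made outside Terraform change the image without
    # Terraform trying to revert it on the next apply.
    ignore_changes = [
      template[0].spec[0].containers[0].image,
    ]
  }
}
With this in place, terraform apply leaves whatever image the pipeline deployed via gcloud run deploy untouched; if you prefer a coarser rule, you can ignore the whole template block instead.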
I am new to Terraform, so I am looking for some advice.
I need to deploy 30+ AWS Glue jobs (Python) using Terraform which will be executed by a Jenkins pipeline.
Looking at the Terraform documentation, creating a single AWS Glue job is pretty straightforward.
resource "aws_glue_job" "example" {
name = "example"
role_arn = aws_iam_role.example.arn
command {
script_location = "s3://${aws_s3_bucket.example.bucket}/example.py"
}
}
How can I take this example and deploy 30+ jobs using a single Terraform script? Ideally, I could maintain a "manifest" file that includes entries for job names, script locations, etc. and somehow loop through it. But I am open to suggestions.
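One possible shape for that manifest-plus-loop idea, as a sketch using for_each (the job names, bucket, and shared IAM role below are placeholders, not from the question):
locals {
  # Hypothetical manifest; this could also be loaded from a YAML/JSON file
  # with yamldecode()/jsondecode() and file().
  glue_jobs = {
    "job-one" = { script_location = "s3://my-glue-scripts/job-one.py" }
    "job-two" = { script_location = "s3://my-glue-scripts/job-two.py" }
  }
}

resource "aws_glue_job" "jobs" {
  for_each = local.glue_jobs

  name     = each.key
  role_arn = aws_iam_role.example.arn # assumed shared IAM role

  command {
    script_location = each.value.script_location
  }
}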
Does Terraform support the AWS Backup feature for restoring an image from a vault (https://www.terraform.io/docs/providers/aws/r/backup_plan.html)?
Reading the documentation, I can see that it supports creating a backup plan, assigning resources and a policy, and creating a vault, but it does not appear to support restoring an image or EBS volume.
How do I add the restore block in my Terraform template?
Terraform's execution model is designed for translating declarative descriptions of an intended state into imperative actions to reach that state automatically, and so its model doesn't really support "exceptional" processes like restoring backups.
However, you can develop a process for restoring backups alongside Terraform whereby the main restore action is done using the AWS Console, AWS CLI, or API in your own automation, and then you inform Terraform after the fact that it should use the restored object via its state manipulation commands.
For example, if you have an EBS volume managed by Terraform using an aws_ebs_volume resource, you might also use Terraform to configure an AWS Backup plan for that volume, and then backups will be created automatically as per your plan.
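As a rough sketch of that arrangement (the vault name, schedule, and the IAM role used by AWS Backup are placeholders):
resource "aws_ebs_volume" "example" {
  availability_zone = "us-east-1a"
  size              = 40
}

resource "aws_backup_vault" "example" {
  name = "example-vault"
}

resource "aws_backup_plan" "example" {
  name = "example-plan"

  rule {
    rule_name         = "daily"
    target_vault_name = aws_backup_vault.example.name
    schedule          = "cron(0 5 ? * * *)"
  }
}

resource "aws_backup_selection" "example" {
  name         = "example-selection"
  plan_id      = aws_backup_plan.example.id
  iam_role_arn = aws_iam_role.backup.arn # assumed role with the AWS Backup service policy

  resources = [
    aws_ebs_volume.example.arn,
  ]
}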
In the exceptional situation where your existing volume is lost or corrupted and you want to restore the backup, the person responding to the incident can follow this process:
Create an AWS Backup restore job either using the AWS Console, the AWS CLI, or some software of your own design using the AWS Backup API.
Once the restore job is complete, consult the CreatedResourceArn to find the ID of the new object that was created by restoring the backup. In the case of an EBS volume, this is the final part of the ARN after the :volume/ separator.
Tell Terraform to "forget" the existing EBS volume object that is now destroyed or damaged:
terraform state rm aws_ebs_volume.example
Tell Terraform to import the object created by restoring the backup as the new remote object associated with the Terraform resource:
terraform import aws_ebs_volume.example vol-049df61146c4d7901
If your old EBS volume is still present but corrupted or otherwise damaged, the final step would be to locate and manually destroy the remnant of it, because Terraform is no longer managing it and it would otherwise be left in place forever.
After this process is complete, Terraform will consider the new object to be the one managed by that resource, and you can use Terraform as normal with that resource moving forward. The same principle applies to any of the object types supported by AWS Backup, as long as they have a resource type in the AWS provider that supports terraform import.
I'm new to Terraform and I'm trying to create my first resource.
The provider is AWS and the provider download completed
I have run terraform init and that has completed.
However, when I try to run terraform plan, it tells me nothing in my infrastructure will change:
provider "aws" {
access_key = "I input my key here"
secret_key = " I input my key here"
region = "us-east-1"
}
resource "aws_instance" "Server1" {
ami = "ami-0ea83ef2bc1efef82"
instance_type = "t2.micro"
}
And that is correct.
"terraform plan" will just create execution plan, but it will not execute anything!
The terraform plan command is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files.
This command is a convenient way to check whether the execution plan for a set of changes matches your expectations without making any changes to real resources or to the state
Terraform Plan
Now, post "terraform plan", what you have to do to create an AWS instance is hit "terraform apply"
"terraform apply" will pick the plan generated by "terraform plan" and will execute it on the provider mentioned. If its execution is successful, an EC2 instance will be created.
The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan.
Terraform Apply
Save the code and then run
terraform init
and then terraform plan and terraform apply!
Being able to capture infrastructure in a single Terraform file has obvious benefits. However, I am not clear in my mind how - once, for example, a virtual machine has been created - subsequent updates are handled.
So, to provide a specific scenario. Suppose that using Terraform we set up an Azure vm with SQL Server 2014. Then, after a month we decide that we should like to update that vm with the latest service pack for SQL Server 2014 that has just been released.
Is the recommended practice that we update the Terraform configuration file and re-apply it?
I have to disagree with the other two responses. Terraform can handle infrastructure updates just fine. The key thing to understand, however, is that Terraform largely follows an immutable infrastructure paradigm, which means that to "update" a resource, you delete the old resource and create a new one to replace it. This is much like functional programming, where variables are immutable, and to "update" something, you actually create a new variable.
The typical pattern with Terraform is to use it to deploy a server image, such as a Virtual Machine (VM) Image (e.g. an Amazon Machine Image (AMI)) or a Container Image (e.g. a Docker Image). When you want to "update" something, you create a new version of your image, deploy that onto a new server, and undeploy the old server.
Here's an example of how that works:
Imagine that you're building a Ruby on Rails app. You get the app working in dev and it's time to deploy to prod. The first step is to package the app as an AMI. You could do this using a tool like Packer. Now you have an AMI with id ami-1234.
Here is a Terraform template you could use to deploy this AMI on a server (an EC2 Instance) in AWS with an Elastic IP Address attached to it:
resource "aws_instance" "example" {
ami = "ami-1234"
instance_type = "t2.micro"
}
resource "aws_eip" "example" {
instance = "${aws_instance.example.id}"
}
When you run terraform apply, Terraform deploys the server, attaches an IP address to it, and now when users visit that IP, they will see v1 of your Rails app.
Some time later, you update your Rails app and want to deploy the new version, v2. To do that, you build a new AMI (i.e. you run Packer again) to get an ami with ID "ami-5678". You update your Terraform templates accordingly:
resource "aws_instance" "example" {
ami = "ami-5678"
instance_type = "t2.micro"
}
When you run terraform apply, Terraform undeploys the old server (which it can find because Terraform records the state of your infrastructure), deploys a new server with the new AMI, and now users will see v2 of your code at that same IP.
Of course, there is one problem here: in between the time when Terraform undeploys v1 and when it deploys v2, your users would see downtime. To work around that, you could use Terraform's create_before_destroy lifecycle setting:
resource "aws_instance" "example" {
ami = "ami-5678"
instance_type = "t2.micro"
lifecycle {
create_before_destroy = true
}
}
With create_before_destroy set to true, Terraform will create the replacement server first, switch the IP to it, and then remove the old server. This allows you to do zero-downtime deployment with immutable infrastructure (note: zero-downtime deployment works better with a load balancer that can do health checks than a simple IP address, especially if your server takes a long time to boot).
For more information on this, check out the book Terraform: Up & Running. The code samples for the book include an example of a zero-downtime deployment with a cluster of servers and a load balancer: https://github.com/brikis98/terraform-up-and-running-code
Terraform is an infrastructure provisioning tool; the configuration/deployment tools would be:
Chef
SaltStack
Ansible
etc.
As I am working with Chef: basically, I provision the server instance with Terraform, then Terraform (via a Terraform provisioner) hands control over to Chef for system configuration and deployment.
For the moment, Terraform cannot delete the node/client in the Chef server, so after you terraform destroy, you need to remove them yourself.
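For reference, a rough sketch of that hand-off (older Terraform versions shipped a built-in chef provisioner, which was removed in Terraform 0.15; the AMI, server URL, run list, and key path below are placeholders):
resource "aws_instance" "web" {
  ami           = "ami-1234"
  instance_type = "t2.micro"

  connection {
    type = "ssh"
    user = "ubuntu"
    host = self.public_ip
  }

  # Hands the freshly provisioned instance over to Chef for configuration.
  provisioner "chef" {
    server_url      = "https://chef.example.com/organizations/myorg" # placeholder
    node_name       = "web-1"
    run_list        = ["my_cookbook::default"]
    user_name       = "admin"
    user_key        = file("admin.pem")
    recreate_client = true
  }
}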
Terraform isn't best placed for this sort of task. Terraform is an infrastructure management tool, not configuration management.
You should use tools such as Chef, Puppet, and Ansible to deal with the configuration of the system.
If you must use Terraform for this task, you could create a template_file resource and place in it the configuration required to install SQL Server, and how to upgrade if a different version is presented. Reference: here
Put that code inside a provisioner under a null_resource resource. Reference: here.
The trigger for this could be the variable containing the SQL version. So, when you present a different SQL version, it'll execute that provisioner on each instance to upgrade the version.
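A rough sketch of that idea (the variables, connection details, and upgrade script below are all placeholders, not part of the answer above):
variable "sql_version" {
  type    = string
  default = "2014-SP2" # bumping this value re-triggers the provisioner
}

resource "null_resource" "sql_upgrade" {
  # When var.sql_version changes, the trigger changes, so this resource is
  # replaced and its provisioner runs again on the next apply.
  triggers = {
    sql_version = var.sql_version
  }

  connection {
    type     = "winrm"
    host     = var.vm_ip_address  # placeholder
    user     = var.admin_username # placeholder
    password = var.admin_password # placeholder
  }

  provisioner "remote-exec" {
    inline = [
      "powershell.exe -File C:\\scripts\\upgrade-sql.ps1 -Version ${var.sql_version}",
    ]
  }
}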