I've come across some AWS resources that were not created through my Terraform configuration and that I later realized I need to update. An example is CloudWatch logs: my initial config (for Lambda, the DB, etc.) didn't include anything to create the log groups, but now that I want to set some configuration on those logs, I'm having trouble adding the resources to my config. I believe I need to do a terraform import for those resources, but that essentially requires me to issue the import command before terraform apply.
This isn't very clean if my process allows only one command (terraform apply).
Any suggestions for managing the import as part of the configuration only? Something like "import if not already managed".
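(For reference, Terraform 1.5 and later support declarative import blocks, so the import happens during terraform apply rather than as a separate command; note the resource still has to exist in AWS. A minimal sketch, using a hypothetical log group name:

    import {
      to = aws_cloudwatch_log_group.lambda
      id = "/aws/lambda/my-function"   # hypothetical: name of the existing log group
    }

    resource "aws_cloudwatch_log_group" "lambda" {
      name              = "/aws/lambda/my-function"
      retention_in_days = 14
    }
)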
I am trying to use Terraformer ( https://github.com/GoogleCloudPlatform/terraformer ) to import some of our legacy infrastructure into Terraform for ease of maintainability going forward. I keep running into a few issues though.
Current State - I am on Terraform version 1.1.7 with a remote backend configuration on S3. The desired end state is to use Terraformer to generate configurations for the legacy infrastructure, and include them as a separate module in the backend remote state.
Problem(s) -
Terraformer only seems to work with Terraform version 0.21.31, and the state files it generates conform to that version as well. I did some research, and the standard suggestion was to use https://github.com/tfutils/tfenv to manage multiple Terraform versions. However, it seems that TF 0.21.31 does not support provider plugin sources, which means I can't install the HashiCorp AWS provider plugin, without which Terraformer can't work. So I am stuck in a deadlock here. Curious to know how others solved it?
Assuming I do somehow manage to solve the version issue, Terraformer seems to generate a local state file. Now, since my remote backend is on S3, how do I merge this local state file into the remote backend state file, so that Terraform realises that I am just planning to import some infra and not re-create it?
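To make the question concrete, I imagine something along these lines might work, though I haven't verified it (the file paths, resource address, and module name below are made up):

    # In the project configured with the S3 backend: download the current remote state
    terraform state pull > merged.tfstate

    # In the directory Terraformer generated (which uses local state):
    # copy each resource across into the downloaded copy
    terraform state mv \
      -state=generated/aws/s3/terraform.tfstate \
      -state-out=../merged.tfstate \
      aws_s3_bucket.legacy_bucket module.legacy.aws_s3_bucket.legacy_bucket

    # Back in the backend-configured project: upload the merged state
    terraform state push merged.tfstate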
Any help here would be much appreciated. Thanks in advance!
Please let me know if you need any more info for a better diagnosis!
I'm a dev on QHub, which is a collection of templatized Terraform scripts to deploy a data science platform on various cloud providers (AWS, GCP, Azure, Digital Ocean). It uses S3 buckets to store the Terraform state, and as part of deployment we need to import that state if it is available. When the infrastructure is initially deployed, or when it is deployed using GitOps, the state files don't exist locally. This results in the requirement for an "import if exists, otherwise continue" behaviour, which I'm having difficulty finding.
Previously we would run terraform import and if the bucket didn't exist the command would fail silently and continue. With the upgrade to terraform v1.0.5 the import command will hang until terraform times out, which is too long.
My current workaround is to use the GNU timeout command to force the import to fail, like so: timeout 10 terraform import module.terraform-state.module.gcs.google_storage_bucket.static-site test-dev-terraform-state. If this fails, I then run terraform apply to create the resource.
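In full, that fallback looks roughly like this (the 10-second timeout is arbitrary, and the resource address and bucket name are the ones from the example above):

    # Try to import the existing bucket, but give up after 10 seconds
    timeout 10 terraform import \
      module.terraform-state.module.gcs.google_storage_bucket.static-site \
      test-dev-terraform-state || true   # tolerate a missing bucket or a timeout

    # Apply as usual; this creates the bucket if the import did not bring it in
    terraform apply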
There is the additional requirement of avoiding the use of cloud-specific CLI tools to determine if the bucket exists.
There has to be a better way to do this. Any thoughts?
I'm new to this Terraform world and I've been assigned the task of creating many configurations for Azure with it.
I'm developing a main.tf script (which creates some resources, like a resource group, vnets, a Kubernetes cluster, app services, etc.), and while coding it and executing terraform apply, it seems to apply only what changed, doing in-place updates.
Then we deleted the resource group the script created, and a colleague of mine had to run the same script (with Terraform creating a resource group with another name) since I didn't have a required permission. After that, if I run terraform apply it fails and gives errors saying that the resources cannot be created because they already exist.
After reading some documentation, I found that it might be because of the state:
https://www.terraform.io/docs/state/index.html
Does updating resources from a script only work within the same Terraform session (i.e., against the same state)?
Even doing a Terraform refresh doesn't seem to work.
Or probably I'm just mistaken and there is no way to update some resources.
EDIT: for some reason the state file that was on the storage only contained a few resources; the solution was to delete everything and create it again.
For new resources, there is nothing more to it: the Terraform script simply creates the resources you define in it.
For existing resources, when you change a script that you have already deployed via Terraform, it checks the state file to work out which resources should be updated. If there is no state file (or you delete it), Terraform will deploy the script from scratch, and if any of the resources you want to deploy already exist, it will fail because of those existing resources. The command terraform refresh just updates the state file with the latest status of the resources you already deployed through Terraform. If the deployment failed and the state file has no resources in it, then refresh is not useful.
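If an existing resource is what blocks the deployment, one option is to bring it under Terraform's management with terraform import instead of deleting and recreating it. A hedged example for a pre-existing resource group (the resource address, subscription ID, and group name below are placeholders):

    terraform import azurerm_resource_group.main \
      /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg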
If someone else ran terraform apply for you because you didn't have access, and now you want to modify that terraform and run it yourself, you need to get the state file that was generated when that other person ran it. You absolutely have to maintain the Terraform state file somewhere, so that it can be accessed on subsequent runs. You should really configure a Terraform backend, instead of using local state files.
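A sketch of what a remote backend could look like on Azure storage (every name below is a placeholder, and the storage account and container must already exist):

    terraform {
      backend "azurerm" {
        resource_group_name  = "tfstate-rg"          # placeholder
        storage_account_name = "tfstatestorage123"   # placeholder, must be globally unique
        container_name       = "tfstate"
        key                  = "main.terraform.tfstate"
      }
    }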
You need to be aware that Terraform stores everything it does in the state file, and refers to that file before every run. A terraform refresh only tells Terraform to refresh the state of the things that are in the state file, it doesn't rebuild the state file from scratch. Understanding Terraform state files is so fundamental to the use of Terraform that you really need to understand this before using it.
Background:
I have a shared module called "releases". releases contains the following resources:
aws_s3_bucket.my_deployment_bucket
aws_iam_role.my_role
aws_iam_role_policy.my_role_policy
aws_iam_instance_profile.my_instance_profile
These resources are used by EC2 instances belonging to an ASG to pull code deployments when they provision themselves. The release resources are created once and will rarely, if ever, change. This module is one of a handful used inside an environment-specific project (qa-static) that has its own tfstate file in AWS.
Fast Forward: It's now time to create a "prd-static" project. This project wants to re-use the environment agnostic AWS resources defined in the releases module. prd-static is basically a copy of qa with beefed up configuration for the database and cache server, etc.
The Problem:
prd-static sees the environment-agnostic AWS resources defined in the "releases" module as new resources that don't exist in AWS yet. An init and plan call shows that it wants to create them from scratch. That makes sense to me, since prd-static has its own tfstate, and tfstate is essentially the system of record, so Terraform doesn't know that no changes should be applied. Ideally, though, Terraform would use AWS as the source of truth for existing resources and their configuration.
If we try to apply the plan as is, the prd-static project just bombs out with an EntityAlreadyExists error, which led me to this post:
what is the best way to solve EntityAlreadyExists error in terraform?
^-- logically I could import these resources into the tfstate file for prd-static and be on my merry way. Then, both projects know about the resources and in theory would only apply updates if the configuration had changed. I was able to import the bucket and the role and then re-run the plan.
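For context, those imports were of roughly this shape, assuming the module block is named releases (the actual bucket and role names are replaced with placeholders):

    terraform import module.releases.aws_s3_bucket.my_deployment_bucket my-deployment-bucket
    terraform import module.releases.aws_iam_role.my_role my-release-role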
Now terraform wants to delete the s3 bucket and recreate the role. That's weird - and not at all what I wanted to do.
TLDR: It appears that while modules like to be shared, modules that create single re-usable resources (like an S3 bucket) really don't want to be shared. It looks like I need to pull the environment-agnostic static resources module into its own project with its own tfstate that can be used independently, rather than trying to share the releases module across environments. Environment-specific stuff that depends on the release resources can reference them via their outputs in my build process.
Should I be able to define a resource in a module, like an S3 bucket, where the same instance is shared across Terraform projects that each have their own tfstate file (remote state in S3)? Because currently I cannot.
And if I really shouldn't be able to do this, is the correct approach to extract the single-instance resources into their own project and depend on its outputs?
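If I do split the shared resources into their own project and state, I assume the environment projects would consume them roughly like this (the state bucket, key, and output name below are made up):

    data "terraform_remote_state" "releases" {
      backend = "s3"
      config = {
        bucket = "my-tfstate-bucket"            # hypothetical state bucket
        key    = "releases/terraform.tfstate"   # hypothetical key of the releases project's state
        region = "us-east-1"
      }
    }

    # Reference the shared deployment bucket through an output instead of re-declaring it
    locals {
      deployment_bucket = data.terraform_remote_state.releases.outputs.deployment_bucket_name
    }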
I want to use Terraform to run a script inside an existing, pre-created instance on any cloud. The instance was created manually; is there any way to push my script to this instance and run it using Terraform?
If yes, how can I connect to the instance with Terraform, push my script, and run it?
I believe ansible is a better option to achieve this easily.
Refer to the example given here:
https://docs.ansible.com/ansible/latest/modules/script_module.html
1. Create a .tf file and describe your already existing resource (e.g. a VM) there.
2. Import the existing thing using terraform import.
3. If this is a VM, push your script to the remote machine using the file provisioner and run it using remote-exec; both steps are described in the Terraform file, so no manual changes are needed (see the sketch after this list).
4. Run terraform plan to check that the expected changes look right, then terraform apply if the plan was fine.
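A minimal sketch of step 3, with the provisioners attached to a null_resource (my addition here, since provisioners only run when a resource is created and an imported VM won't be re-created); the address, user, key path, and script name are all placeholders:

    resource "null_resource" "run_script" {
      connection {
        type        = "ssh"
        host        = "203.0.113.10"          # placeholder: address of the existing VM
        user        = "ubuntu"                # placeholder
        private_key = file("~/.ssh/id_rsa")   # placeholder
      }

      # Copy the script to the remote machine
      provisioner "file" {
        source      = "scripts/setup.sh"      # placeholder: local script to push
        destination = "/tmp/setup.sh"
      }

      # Run it over SSH
      provisioner "remote-exec" {
        inline = [
          "chmod +x /tmp/setup.sh",
          "/tmp/setup.sh",
        ]
      }
    }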
Terraform's core mission is to create, update, and destroy long-lived infrastructure objects. It is not generally concerned with the software running in the compute instances it deploys. Instead, it generally expects each object it is deploying to behave as a sort of specialized "appliance", either by being a managed service provided by your cloud vendor or because you've prepared your own machine image outside of Terraform that is designed to launch the relevant workload immediately when the system boots. Terraform then just provides the system with any configuration information required to find and interact with the surrounding infrastructure.
A less-ideal way to work with Terraform is to use its provisioners feature to do late customization of an image just after it's created, but that's considered to be a last resort because Terraform's lifecycle is not designed to include strong support for such a workflow, and it will tend to require a lot more coupling between your main system and its orchestration layer.
Terraform has no mechanism intended for pushing arbitrary files into existing virtual machines. If your virtual machines need ongoing configuration maintenance after they've been created (by Terraform or otherwise), then that's a use case for traditional configuration management software such as Ansible, Chef, Puppet, etc., rather than for Terraform.