Some existing resources will be re-created if certain parameters are changed. One example is ebs_block_device: changing e.g. its volume_size parameter will even re-create the whole EC2 instance on AWS.
Is there a list of such Terraform resources/parameters, so that we can use them carefully?
In Terraform's model, the decision about whether a particular change can be made using an in-place update or whether it requires replacing the whole object is made dynamically by the provider during the planning step.
Unfortunately, that means that there isn't any way to systematically enumerate all of the kinds of changes that might cause each of those results: those rules are represented as executable code inside the provider plugin, rather than as declarative metadata in the static provider schema (which you can see with the terraform providers schema command).
Although it's true that in many cases any change to a particular argument will require replacement, Terraform is designed to allow providers to implement more specific rules if necessary, such as when a remote system allows upgrading a database engine to a newer version in-place but requires replacement to downgrade the same database engine. A provider implements that during the planning step for that resource type, by comparing the previous value with the new value to determine whether the new value is "older" than the previous value, following whatever rules the remote system requires.
Because of that, the only totally-reliable way to determine whether a particular change will require replacement is to try making the change and run terraform plan to see how the provider proposes to implement the change.
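For example, when a proposed change forces replacement, terraform plan marks the resource with a -/+ (destroy and then create replacement) action and annotates the responsible argument. An illustrative excerpt, whose exact formatting varies by Terraform version:
  # aws_instance.example must be replaced
-/+ resource "aws_instance" "example" {
      ~ ami = "ami-0abc1234" -> "ami-0def5678" # forces replacement
      # (other arguments unchanged)
    }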
Sometimes provider developers include details in their own documentation about which changes will require replacement, but in other cases they will assume that you're familiar with the behavior of the underlying API and rely on that API's own documentation to learn about what can be changed with an in-place update.
Due to an accident, my .tfstate got deleted. The current situation is that I have all the resources up and live, and I also have all the Terraform code for those resources. However, what's missing is the state.
I am aware of terraform import, but from reading the documentation I understood that you have to specify each resource individually to import it. This would be too tedious and impractical, since the number of resources is high. The backend being used is azurerm.
My question is: is there a way I can import existing resources using a filter, such as tags or a name pattern?
I am aware of third-party tools such as Terraformer, but I am looking for a more standard and foolproof way to do it, since the infrastructure is of a critical nature.
Terraformer is a tool that supports large-scale import jobs by pattern. It explicitly supports most major and minor cloud providers, including Azure.
I am working on unit testing for Terraform. For some modules, I have to be authenticated to AWS to be able to retrieve a Terraform data source. Is there any way to mock or override a data source, for something like the below?
data "aws_region" "current" {
}
Thank you in advance.
Terraform does not include any built-in means to mock the behavior of a provider. Module authors generally test their modules using integration testing rather than unit testing, e.g. by writing a testing-only Terraform configuration that calls the module in question with suitable arguments to exercise the behaviors the author wishes to test.
The testing process is then to run terraform apply within that test configuration and observe it making the intended changes. Once you are finished testing you can run terraform destroy to clean up the temporary infrastructure that the test configuration declared.
A typical Terraform module doesn't have much useful behavior in itself and instead is just a wrapper around provider behaviors, so integration testing is often a more practical approach than unit testing in order to achieve confidence that the module will behave as expected in real use.
If you particularly want to unit test a module, I think the best way to achieve your goal within the Terraform language itself is to think about working at the module level of abstraction rather than the resource level of abstraction. You can then use Module Composition techniques, like dependency inversion, so that you can pass your module fake input when you are testing it and real input when it's being used in a "real" configuration. The module itself would therefore no longer depend directly on the aws_region data source.
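As a minimal sketch of that dependency-inversion approach (the module path and variable name here are hypothetical), the module declares the region as an input variable rather than reading the data source itself:
# modules/example/variables.tf
variable "aws_region" {
  type = string
}
A test configuration can then pass a fake value, while the "real" configuration wires in the data source:
# test configuration: no AWS access needed to supply this input
module "example_under_test" {
  source     = "./modules/example"
  aws_region = "us-east-1"
}

# real configuration: pass the genuine region
data "aws_region" "current" {}

module "example" {
  source     = "./modules/example"
  aws_region = data.aws_region.current.name
}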
However, it's unlikely that you'd be able to achieve unit testing in the purest sense with the Terraform language alone unless the module you are testing consists only of local computation (locals and output blocks, and local-compute-only resources) and doesn't interact with any remote systems at all. While you could certainly make a Terraform module that takes an AWS region as an argument, there's little the module could do with that value unless it is also interacting with the AWS provider.
A more extreme alternative would be to write your own aws provider that contains the subset of resource type names you want to test with but whose implementations of them are all fake. You could then use your own fake aws provider instead of the real one when you're running your tests, and thus avoid interacting with real AWS APIs at all.
This path is considerably more work, of course, so I would suggest embarking on it only if the value of unit testing your particular module(s) is high.
Another, similarly labour-intensive solution would be to emulate the AWS API on localhost and redirect the (normal) AWS provider there. LocalStack (https://github.com/localstack/localstack) may be helpful with this; see https://docs.localstack.cloud/integrations/terraform/.
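As a rough sketch of that setup (the dummy credentials and port number depend on your LocalStack configuration), the AWS provider can be pointed at local endpoints:
provider "aws" {
  region                      = "us-east-1"
  access_key                  = "test"   # dummy credentials accepted by LocalStack
  secret_key                  = "test"
  skip_credentials_validation = true
  skip_requesting_account_id  = true

  endpoints {
    s3  = "http://localhost:4566"   # LocalStack's default edge port
    ec2 = "http://localhost:4566"
  }
}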
I am trying to build a Terraform provider, and there is a field from the external API that can return either a list or a string. What would be the best way of defining the schema for an API with this behavior?
I read through the Terraform provider docs (https://www.terraform.io/docs/extend/schemas/schema-types.html) and was unable to find a way of solving this.
At the time of writing, there is no way to achieve this with the Terraform SDK. The SDK currently supports the common features of both Terraform 0.11 and 0.12 in order to retain 0.11 compatibility, and since dynamic types were introduced only in Terraform 0.12, that feature is not yet available in the SDK.
A common workaround for providers at the moment has been to define the attribute in question as being a string and add _json to the end of the name, and then have the provider write a JSON-encoded value into the attribute. The calling configuration can then use jsondecode to extract the value.
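For example, with a hypothetical resource type example_thing whose rules attribute is exposed this way as rules_json, the calling configuration might do:
resource "example_thing" "foo" {
  # (hypothetical resource whose provider populates rules_json)
}

locals {
  # may decode to either a string or a list, depending on what the API returned
  rules = jsondecode(example_thing.foo.rules_json)
}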
The reason for the _json suffix here is that it's planned to support dynamic typing like this in a future SDK revision (once Terraform 0.11 compatibility is dropped) and so having a name like foo_json today leaves the name foo available for use later in a deprecation cycle moving away from foo_json.
Internally Terraform 0.12's provider protocol does already support this possibility by marking a particular attribute as having a dynamic type. The SDK just doesn't have an interface to select that, for the reasons given above. In principle it is therefore possible to write a provider directly against the raw protocol which would support only Terraform 0.12 and could use dynamic typing, but that's a lot of work and is unprecedented so far outside of prototypes and experiments. I would not recommend it, but am mentioning it only for completeness.
I would like to use Terraform programmatically, like API/function calls, to create and tear down infrastructure in multiple specific steps, e.g. reserve a couple of EIPs, add an instance to a region, and assign one of the IPs, all in separate steps. Terraform will currently run locally and not on a server.
I would like to know if there is a recommended way or best practice for creating the configuration to support this. So far it seems that my options are:
Properly define input/output, heavily rely on resource separation, modules, the count parameter and interpolation.
Generate the configuration files as JSON, which appears to be less common
Thanks!
Instead of using Terraform directly, I would recommend a third-party build/deploy tool such as Jenkins, Bamboo, or Travis CI to manage the release of your Terraform-managed infrastructure. The reason is that you should treat your Terraform code in exactly the same manner as you would application code (i.e. have a proper build/release pipeline). As an added bonus, these tools come with a standard, integrated API that can be used to execute your build and deploy processes.
If you choose not to create a build/deploy pipeline, another option is to use a tool such as RunDeck, which allows you to execute arbitrary commands on a server. It also has the added bonus of an excellent privilege-control system that only allows specified users to execute commands. Your other option could be to upgrade from the open-source version of Terraform to the Pro/Premium version, which includes an integrated GUI and an extensive API.
As for best practices for using an API to automate creation/teardown of your infrastructure with Terraform, the best practices are the same regardless of which tools you use. You mentioned some good practices, such as clearly defining input/output and creating a separation of concerns, which are excellent! Some others I can recommend are:
Create all of your infrastructure code with idempotency in mind.
Use modules to separate the common shared portions of your code (see the sketch after this list). This reduces the number of places where you have to update code, and therefore the number of points of error when pushing an update.
Write your code with scalability in mind from the beginning. It is much simpler to start with this than to adjust later on when it is too late.
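As a hypothetical sketch of that module reuse (the network module and its arguments are made up for illustration), two environments call the same shared module, so a fix lands in one place:
module "network_staging" {
  source     = "./modules/network"   # shared module code lives here
  cidr_block = "10.0.0.0/16"
  env        = "staging"
}

module "network_production" {
  source     = "./modules/network"
  cidr_block = "10.1.0.0/16"
  env        = "production"
}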
We are implementing a Puppet module for a storage subsystem. We are implementing our own types and providers, and we will have types like volume, host, etc. related to the storage subsystem.
We have made our types ensurable, and creation and deletion are working fine.
Our question is, how to implement the modification of an existing resource?
Suppose a volume resource has been created and now I want to change the expiration hours of the volume, how do I implement this in my provider?
Is it by creating a new ensure value like modify or is there some other way?
how to implement the modification of an existing resource? Suppose a volume resource has been created and now I want to change the expiration hours of the volume, how do I implement this in my provider? Is it by creating a new ensure value like modify or is there some other way?
No, you do not create a special ensure value. That would be hard to work with, because it would require that your manifests be aware of whether the resource needs to be created. Remember always that your manifests describe the target state of each resource, irrespective (to a first approximation) of their current state or even whether they exist.
The custom type documentation is a little vague here, however, because the implementation is basically open. You can do whatever makes sense for you. But there are two particularly common models:
the provider's property setter methods (also) modify the physical resource's properties if they are out of sync, on a property-by-property basis.
the provider implements flushing, so resource properties are synchronized with the system directly or indirectly by the provider's flush method.
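As a minimal sketch of the flushing model (the volume type and the api_modify_volume helper are hypothetical), the property setters only record the desired values, and flush applies all pending changes in one call:
Puppet::Type.type(:volume).provide(:storage_api) do
  def initialize(value = {})
    super(value)
    @property_flush = {}
  end

  # setter just records the out-of-sync property; nothing is sent yet
  def expiration_hours=(value)
    @property_flush[:expiration_hours] = value
  end

  # Puppet calls flush once after all property setters have run
  def flush
    api_modify_volume(resource[:name], @property_flush) unless @property_flush.empty?
    @property_flush = {}
  end
end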